2026-03-27 00:00:09.764858 | Job console starting
2026-03-27 00:00:09.796870 | Updating git repos
2026-03-27 00:00:09.872925 | Cloning repos into workspace
2026-03-27 00:00:10.345887 | Restoring repo states
2026-03-27 00:00:10.372164 | Merging changes
2026-03-27 00:00:10.372193 | Checking out repos
2026-03-27 00:00:11.110948 | Preparing playbooks
2026-03-27 00:00:12.853954 | Running Ansible setup
2026-03-27 00:00:22.567462 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-27 00:00:24.332978 |
2026-03-27 00:00:24.333091 | PLAY [Base pre]
2026-03-27 00:00:24.368279 |
2026-03-27 00:00:24.368389 | TASK [Setup log path fact]
2026-03-27 00:00:24.406940 | orchestrator | ok
2026-03-27 00:00:24.445011 |
2026-03-27 00:00:24.445132 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-27 00:00:24.492624 | orchestrator | ok
2026-03-27 00:00:24.501991 |
2026-03-27 00:00:24.502079 | TASK [emit-job-header : Print job information]
2026-03-27 00:00:24.600101 | # Job Information
2026-03-27 00:00:24.600238 | Ansible Version: 2.16.14
2026-03-27 00:00:24.600268 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-27 00:00:24.600296 | Pipeline: periodic-midnight
2026-03-27 00:00:24.600315 | Executor: 521e9411259a
2026-03-27 00:00:24.600333 | Triggered by: https://github.com/osism/testbed
2026-03-27 00:00:24.600351 | Event ID: a7034481b12a4abbbdb9058953750f69
2026-03-27 00:00:24.605683 |
2026-03-27 00:00:24.605767 | LOOP [emit-job-header : Print node information]
2026-03-27 00:00:24.819163 | orchestrator | ok:
2026-03-27 00:00:24.819305 | orchestrator | # Node Information
2026-03-27 00:00:24.819333 | orchestrator | Inventory Hostname: orchestrator
2026-03-27 00:00:24.819354 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-27 00:00:24.819373 | orchestrator | Username: zuul-testbed02
2026-03-27 00:00:24.819390 | orchestrator | Distro: Debian 12.13
2026-03-27 00:00:24.819409 | orchestrator | Provider: static-testbed
2026-03-27 00:00:24.819427 | orchestrator | Region:
2026-03-27 00:00:24.819444 | orchestrator | Label: testbed-orchestrator
2026-03-27 00:00:24.819460 | orchestrator | Product Name: OpenStack Nova
2026-03-27 00:00:24.819476 | orchestrator | Interface IP: 81.163.193.140
2026-03-27 00:00:24.836439 |
2026-03-27 00:00:24.836551 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-27 00:00:25.987514 | orchestrator -> localhost | changed
2026-03-27 00:00:25.994929 |
2026-03-27 00:00:25.995027 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-27 00:00:28.666710 | orchestrator -> localhost | changed
2026-03-27 00:00:28.700865 |
2026-03-27 00:00:28.700976 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-27 00:00:29.471217 | orchestrator -> localhost | ok
2026-03-27 00:00:29.478253 |
2026-03-27 00:00:29.478355 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-27 00:00:29.517476 | orchestrator | ok
2026-03-27 00:00:29.561944 | orchestrator | included: /var/lib/zuul/builds/96cbe4b924ce41fb84664617445136cc/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-27 00:00:29.582279 |
2026-03-27 00:00:29.582372 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-27 00:00:33.717911 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-27 00:00:33.718116 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/96cbe4b924ce41fb84664617445136cc/work/96cbe4b924ce41fb84664617445136cc_id_rsa
2026-03-27 00:00:33.718150 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/96cbe4b924ce41fb84664617445136cc/work/96cbe4b924ce41fb84664617445136cc_id_rsa.pub
2026-03-27 00:00:33.718173 | orchestrator -> localhost | The key fingerprint is:
2026-03-27 00:00:33.718197 | orchestrator -> localhost | SHA256:RNacyR8ibY+ZxZdMH1dk3YT6SkYHdgzQzzDwarjqQMI zuul-build-sshkey
2026-03-27 00:00:33.718217 | orchestrator -> localhost | The key's randomart image is:
2026-03-27 00:00:33.718245 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-27 00:00:33.718264 | orchestrator -> localhost | | o=oB.=.*X|
2026-03-27 00:00:33.718283 | orchestrator -> localhost | | o. X.X Bo=|
2026-03-27 00:00:33.718301 | orchestrator -> localhost | | .o X.@ .|
2026-03-27 00:00:33.718318 | orchestrator -> localhost | | . . .+.= + |
2026-03-27 00:00:33.718335 | orchestrator -> localhost | | E . S o. o |
2026-03-27 00:00:33.718356 | orchestrator -> localhost | | o o o . |
2026-03-27 00:00:33.718374 | orchestrator -> localhost | | . . o . |
2026-03-27 00:00:33.718391 | orchestrator -> localhost | | . . . |
2026-03-27 00:00:33.718409 | orchestrator -> localhost | | .o |
2026-03-27 00:00:33.718427 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-27 00:00:33.718469 | orchestrator -> localhost | ok: Runtime: 0:00:02.852948
2026-03-27 00:00:33.726791 |
2026-03-27 00:00:33.726930 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-27 00:00:33.765563 | orchestrator | ok
2026-03-27 00:00:33.784356 | orchestrator | included: /var/lib/zuul/builds/96cbe4b924ce41fb84664617445136cc/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-27 00:00:33.800137 |
2026-03-27 00:00:33.800226 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-27 00:00:33.864813 | orchestrator | skipping: Conditional result was False
2026-03-27 00:00:33.871832 |
2026-03-27 00:00:33.871941 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-27 00:00:34.805416 | orchestrator | changed
2026-03-27 00:00:34.815694 |
2026-03-27 00:00:34.815788 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-27 00:00:35.179626 | orchestrator | ok
2026-03-27 00:00:35.184901 |
2026-03-27 00:00:35.184990 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-27 00:00:35.687273 | orchestrator | ok
2026-03-27 00:00:35.694339 |
2026-03-27 00:00:35.694426 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-27 00:00:36.159921 | orchestrator | ok
2026-03-27 00:00:36.173682 |
2026-03-27 00:00:36.173784 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-27 00:00:36.241577 | orchestrator | skipping: Conditional result was False
2026-03-27 00:00:36.248444 |
2026-03-27 00:00:36.248652 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-27 00:00:37.203650 | orchestrator -> localhost | changed
2026-03-27 00:00:37.219313 |
2026-03-27 00:00:37.219425 | TASK [add-build-sshkey : Add back temp key]
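The temp-key creation logged above is essentially an unattended `ssh-keygen` run with the build UUID as the file name. A minimal sketch of that step, assuming the standard OpenSSH client (`BUILD_UUID` and `WORK_DIR` here stand in for the Zuul-provided build UUID and workspace path):

```shell
# Generate a passphrase-less 3072-bit RSA key for the build, as the
# add-build-sshkey role does; the comment matches the log's key comment.
BUILD_UUID=96cbe4b924ce41fb84664617445136cc
WORK_DIR=$(mktemp -d)
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey \
    -f "${WORK_DIR}/${BUILD_UUID}_id_rsa"
# Print the SHA256 fingerprint, as shown in the log output.
ssh-keygen -l -f "${WORK_DIR}/${BUILD_UUID}_id_rsa.pub"
```

The private key is then loaded into the executor's SSH agent ("Add back temp key") and its public half appended to `authorized_keys` on every node, so the rest of the job can SSH with a key scoped to this single build.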
2026-03-27 00:00:38.248049 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/96cbe4b924ce41fb84664617445136cc/work/96cbe4b924ce41fb84664617445136cc_id_rsa (zuul-build-sshkey)
2026-03-27 00:00:38.248265 | orchestrator -> localhost | ok: Runtime: 0:00:00.026317
2026-03-27 00:00:38.254357 |
2026-03-27 00:00:38.254450 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-27 00:00:38.983874 | orchestrator | ok
2026-03-27 00:00:38.990121 |
2026-03-27 00:00:38.991918 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-27 00:00:39.049900 | orchestrator | skipping: Conditional result was False
2026-03-27 00:00:39.151274 |
2026-03-27 00:00:39.151395 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-27 00:00:39.798051 | orchestrator | ok
2026-03-27 00:00:39.848013 |
2026-03-27 00:00:39.848154 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-27 00:00:39.905419 | orchestrator | ok
2026-03-27 00:00:39.927717 |
2026-03-27 00:00:39.927843 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-27 00:00:40.767323 | orchestrator -> localhost | ok
2026-03-27 00:00:40.773789 |
2026-03-27 00:00:40.773880 | TASK [validate-host : Collect information about the host]
2026-03-27 00:00:42.415104 | orchestrator | ok
2026-03-27 00:00:42.446364 |
2026-03-27 00:00:42.446482 | TASK [validate-host : Sanitize hostname]
2026-03-27 00:00:42.629877 | orchestrator | ok
2026-03-27 00:00:42.637243 |
2026-03-27 00:00:42.637347 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-27 00:00:44.186464 | orchestrator -> localhost | changed
2026-03-27 00:00:44.195706 |
2026-03-27 00:00:44.195801 | TASK [validate-host : Collect information about zuul worker]
2026-03-27 00:00:44.822474 | orchestrator | ok
2026-03-27 00:00:44.828403 |
2026-03-27 00:00:44.828508 | TASK [validate-host : Write out all zuul information for each host]
2026-03-27 00:00:46.012055 | orchestrator -> localhost | changed
2026-03-27 00:00:46.027611 |
2026-03-27 00:00:46.027706 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-27 00:00:46.360275 | orchestrator | ok
2026-03-27 00:00:46.365791 |
2026-03-27 00:00:46.365875 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-27 00:02:10.717821 | orchestrator | changed:
2026-03-27 00:02:10.718075 | orchestrator | .d..t...... src/
2026-03-27 00:02:10.718112 | orchestrator | .d..t...... src/github.com/
2026-03-27 00:02:10.718137 | orchestrator | .d..t...... src/github.com/osism/
2026-03-27 00:02:10.718158 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-27 00:02:10.718179 | orchestrator | RedHat.yml
2026-03-27 00:02:10.753838 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-27 00:02:10.753856 | orchestrator | RedHat.yml
2026-03-27 00:02:10.753912 | orchestrator | = 2.2.0"...
2026-03-27 00:02:24.824837 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-27 00:02:24.843003 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-27 00:02:24.990561 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-27 00:02:25.457327 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-27 00:02:25.527995 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-27 00:02:26.067124 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-27 00:02:26.338724 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-27 00:02:27.143061 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-27 00:02:27.143136 | orchestrator |
2026-03-27 00:02:27.143147 | orchestrator | Providers are signed by their developers.
2026-03-27 00:02:27.143157 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-27 00:02:27.143162 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-27 00:02:27.143177 | orchestrator |
2026-03-27 00:02:27.143181 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-27 00:02:27.143194 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-27 00:02:27.143198 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-27 00:02:27.143203 | orchestrator | you run "tofu init" in the future.
2026-03-27 00:02:27.143379 | orchestrator |
2026-03-27 00:02:27.143388 | orchestrator | OpenTofu has been successfully initialized!
2026-03-27 00:02:27.143411 | orchestrator |
2026-03-27 00:02:27.143416 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-27 00:02:27.143420 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-27 00:02:27.143424 | orchestrator | should now work.
2026-03-27 00:02:27.143428 | orchestrator |
2026-03-27 00:02:27.143436 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-27 00:02:27.143440 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-27 00:02:27.143448 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-27 00:02:27.306700 | orchestrator | Created and switched to workspace "ci"!
2026-03-27 00:02:27.306754 | orchestrator |
2026-03-27 00:02:27.306762 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-27 00:02:27.306768 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-27 00:02:27.306775 | orchestrator | for this configuration.
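The OpenTofu output above corresponds to roughly the following CLI sequence. This is a sketch: the exact wrapper script the testbed job runs is not part of this excerpt, only the tool output is.

```shell
# Download providers and write .terraform.lock.hcl; the log shows
# hashicorp/local, hashicorp/null and terraform-provider-openstack/openstack
# being resolved and installed during this step.
tofu init

# Create and switch to an isolated state workspace; "ci" matches the
# 'Created and switched to workspace "ci"!' message in the log.
tofu workspace new ci

# Preview the resources to be created, producing the execution plan that
# follows in the log.
tofu plan
```

Using a fresh workspace per run keeps the CI deployment's state separate from any default-workspace state for the same configuration.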
2026-03-27 00:02:27.425766 | orchestrator | ci.auto.tfvars
2026-03-27 00:02:27.777159 | orchestrator | default_custom.tf
2026-03-27 00:02:29.591048 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-27 00:02:30.189630 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-27 00:02:30.551604 | orchestrator |
2026-03-27 00:02:30.551661 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-27 00:02:30.551671 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-27 00:02:30.551702 | orchestrator |   + create
2026-03-27 00:02:30.551719 | orchestrator |  <= read (data resources)
2026-03-27 00:02:30.551733 | orchestrator |
2026-03-27 00:02:30.551738 | orchestrator | OpenTofu will perform the following actions:
2026-03-27 00:02:30.551915 | orchestrator |
2026-03-27 00:02:30.551933 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-03-27 00:02:30.551938 | orchestrator |   # (config refers to values not yet known)
2026-03-27 00:02:30.551943 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-03-27 00:02:30.551947 | orchestrator |       + checksum = (known after apply)
2026-03-27 00:02:30.551952 | orchestrator |       + created_at = (known after apply)
2026-03-27 00:02:30.551956 | orchestrator |       + file = (known after apply)
2026-03-27 00:02:30.551960 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.551978 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.551982 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-27 00:02:30.551986 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-27 00:02:30.551991 | orchestrator |       + most_recent = true
2026-03-27 00:02:30.551995 | orchestrator |       + name = (known after apply)
2026-03-27 00:02:30.551999 | orchestrator |       + protected = (known after apply)
2026-03-27 00:02:30.552003 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.552009 | orchestrator |       + schema = (known after apply)
2026-03-27 00:02:30.552013 | orchestrator |       + size_bytes = (known after apply)
2026-03-27 00:02:30.552017 | orchestrator |       + tags = (known after apply)
2026-03-27 00:02:30.552021 | orchestrator |       + updated_at = (known after apply)
2026-03-27 00:02:30.552025 | orchestrator |     }
2026-03-27 00:02:30.552139 | orchestrator |
2026-03-27 00:02:30.552152 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-03-27 00:02:30.552156 | orchestrator |   # (config refers to values not yet known)
2026-03-27 00:02:30.552160 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-03-27 00:02:30.552164 | orchestrator |       + checksum = (known after apply)
2026-03-27 00:02:30.552168 | orchestrator |       + created_at = (known after apply)
2026-03-27 00:02:30.552172 | orchestrator |       + file = (known after apply)
2026-03-27 00:02:30.552176 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.552180 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.552183 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-27 00:02:30.552188 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-27 00:02:30.552192 | orchestrator |       + most_recent = true
2026-03-27 00:02:30.552196 | orchestrator |       + name = (known after apply)
2026-03-27 00:02:30.552199 | orchestrator |       + protected = (known after apply)
2026-03-27 00:02:30.552203 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.552207 | orchestrator |       + schema = (known after apply)
2026-03-27 00:02:30.552211 | orchestrator |       + size_bytes = (known after apply)
2026-03-27 00:02:30.552215 | orchestrator |       + tags = (known after apply)
2026-03-27 00:02:30.552218 | orchestrator |       + updated_at = (known after apply)
2026-03-27 00:02:30.552222 | orchestrator |     }
2026-03-27 00:02:30.552328 | orchestrator |
2026-03-27 00:02:30.552342 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-03-27 00:02:30.552349 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-03-27 00:02:30.552356 | orchestrator |       + content = (known after apply)
2026-03-27 00:02:30.552363 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-27 00:02:30.552369 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-27 00:02:30.552375 | orchestrator |       + content_md5 = (known after apply)
2026-03-27 00:02:30.552382 | orchestrator |       + content_sha1 = (known after apply)
2026-03-27 00:02:30.552388 | orchestrator |       + content_sha256 = (known after apply)
2026-03-27 00:02:30.552395 | orchestrator |       + content_sha512 = (known after apply)
2026-03-27 00:02:30.552402 | orchestrator |       + directory_permission = "0777"
2026-03-27 00:02:30.552409 | orchestrator |       + file_permission = "0644"
2026-03-27 00:02:30.552416 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-03-27 00:02:30.552423 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.552431 | orchestrator |     }
2026-03-27 00:02:30.552575 | orchestrator |
2026-03-27 00:02:30.552595 | orchestrator |   # local_file.id_rsa_pub will be created
2026-03-27 00:02:30.552603 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-03-27 00:02:30.552610 | orchestrator |       + content = (known after apply)
2026-03-27 00:02:30.552615 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-27 00:02:30.552621 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-27 00:02:30.552628 | orchestrator |       + content_md5 = (known after apply)
2026-03-27 00:02:30.552635 | orchestrator |       + content_sha1 = (known after apply)
2026-03-27 00:02:30.552641 | orchestrator |       + content_sha256 = (known after apply)
2026-03-27 00:02:30.552653 | orchestrator |       + content_sha512 = (known after apply)
2026-03-27 00:02:30.552658 | orchestrator |       + directory_permission = "0777"
2026-03-27 00:02:30.552662 | orchestrator |       + file_permission = "0644"
2026-03-27 00:02:30.552673 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-03-27 00:02:30.552677 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.552681 | orchestrator |     }
2026-03-27 00:02:30.552784 | orchestrator |
2026-03-27 00:02:30.552796 | orchestrator |   # local_file.inventory will be created
2026-03-27 00:02:30.552801 | orchestrator |   + resource "local_file" "inventory" {
2026-03-27 00:02:30.552805 | orchestrator |       + content = (known after apply)
2026-03-27 00:02:30.552810 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-27 00:02:30.552813 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-27 00:02:30.552818 | orchestrator |       + content_md5 = (known after apply)
2026-03-27 00:02:30.552821 | orchestrator |       + content_sha1 = (known after apply)
2026-03-27 00:02:30.552826 | orchestrator |       + content_sha256 = (known after apply)
2026-03-27 00:02:30.552830 | orchestrator |       + content_sha512 = (known after apply)
2026-03-27 00:02:30.552835 | orchestrator |       + directory_permission = "0777"
2026-03-27 00:02:30.552839 | orchestrator |       + file_permission = "0644"
2026-03-27 00:02:30.552843 | orchestrator |       + filename = "inventory.ci"
2026-03-27 00:02:30.552847 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.552851 | orchestrator |     }
2026-03-27 00:02:30.552955 | orchestrator |
2026-03-27 00:02:30.552968 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-03-27 00:02:30.552973 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-03-27 00:02:30.552976 | orchestrator |       + content = (sensitive value)
2026-03-27 00:02:30.552981 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-27 00:02:30.552984 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-27 00:02:30.552989 | orchestrator |       + content_md5 = (known after apply)
2026-03-27 00:02:30.552992 | orchestrator |       + content_sha1 = (known after apply)
2026-03-27 00:02:30.552996 | orchestrator |       + content_sha256 = (known after apply)
2026-03-27 00:02:30.553000 | orchestrator |       + content_sha512 = (known after apply)
2026-03-27 00:02:30.553005 | orchestrator |       + directory_permission = "0700"
2026-03-27 00:02:30.553009 | orchestrator |       + file_permission = "0600"
2026-03-27 00:02:30.553013 | orchestrator |       + filename = ".id_rsa.ci"
2026-03-27 00:02:30.553017 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.553021 | orchestrator |     }
2026-03-27 00:02:30.553048 | orchestrator |
2026-03-27 00:02:30.553059 | orchestrator |   # null_resource.node_semaphore will be created
2026-03-27 00:02:30.553064 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-03-27 00:02:30.553068 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.553072 | orchestrator |     }
2026-03-27 00:02:30.553161 | orchestrator |
2026-03-27 00:02:30.553237 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-27 00:02:30.553243 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-27 00:02:30.553247 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.553251 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.553255 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.553259 | orchestrator |       + image_id = (known after apply)
2026-03-27 00:02:30.553263 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.553267 | orchestrator |       + name = "testbed-volume-manager-base"
2026-03-27 00:02:30.553271 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.553275 | orchestrator |       + size = 80
2026-03-27 00:02:30.553279 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.553283 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.553287 | orchestrator |     }
2026-03-27 00:02:30.553373 | orchestrator |
2026-03-27 00:02:30.553385 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-27 00:02:30.553390 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-27 00:02:30.553394 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.553398 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.553402 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.553411 | orchestrator |       + image_id = (known after apply)
2026-03-27 00:02:30.553415 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.553419 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-03-27 00:02:30.553423 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.553427 | orchestrator |       + size = 80
2026-03-27 00:02:30.553431 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.553435 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.553439 | orchestrator |     }
2026-03-27 00:02:30.553540 | orchestrator |
2026-03-27 00:02:30.553553 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-27 00:02:30.553558 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-27 00:02:30.553562 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.553566 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.553571 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.553575 | orchestrator |       + image_id = (known after apply)
2026-03-27 00:02:30.553579 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.553583 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-03-27 00:02:30.553587 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.553591 | orchestrator |       + size = 80
2026-03-27 00:02:30.553595 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.553599 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.553603 | orchestrator |     }
2026-03-27 00:02:30.553687 | orchestrator |
2026-03-27 00:02:30.553700 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-27 00:02:30.553704 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-27 00:02:30.553708 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.553712 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.553716 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.553721 | orchestrator |       + image_id = (known after apply)
2026-03-27 00:02:30.553725 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.553729 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-03-27 00:02:30.553733 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.553737 | orchestrator |       + size = 80
2026-03-27 00:02:30.553745 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.553749 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.553753 | orchestrator |     }
2026-03-27 00:02:30.553843 | orchestrator |
2026-03-27 00:02:30.553855 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-27 00:02:30.553860 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-27 00:02:30.553864 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.553868 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.553873 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.553877 | orchestrator |       + image_id = (known after apply)
2026-03-27 00:02:30.553881 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.553886 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-03-27 00:02:30.553890 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.553894 | orchestrator |       + size = 80
2026-03-27 00:02:30.553898 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.553902 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.553907 | orchestrator |     }
2026-03-27 00:02:30.553996 | orchestrator |
2026-03-27 00:02:30.554008 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-27 00:02:30.554039 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-27 00:02:30.554044 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.554049 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.554053 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.554072 | orchestrator |       + image_id = (known after apply)
2026-03-27 00:02:30.554077 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.554081 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-03-27 00:02:30.554085 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.554089 | orchestrator |       + size = 80
2026-03-27 00:02:30.554093 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.554098 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.554102 | orchestrator |     }
2026-03-27 00:02:30.554192 | orchestrator |
2026-03-27 00:02:30.554205 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-27 00:02:30.554209 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-27 00:02:30.554214 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.554218 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.554222 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.554226 | orchestrator |       + image_id = (known after apply)
2026-03-27 00:02:30.554230 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.554234 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-03-27 00:02:30.554238 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.554243 | orchestrator |       + size = 80
2026-03-27 00:02:30.554247 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.554251 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.554256 | orchestrator |     }
2026-03-27 00:02:30.554332 | orchestrator |
2026-03-27 00:02:30.554344 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-27 00:02:30.554350 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-27 00:02:30.554354 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.554359 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.554363 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.554367 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.554371 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-03-27 00:02:30.554375 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.554379 | orchestrator |       + size = 20
2026-03-27 00:02:30.554384 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.554388 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.554392 | orchestrator |     }
2026-03-27 00:02:30.554470 | orchestrator |
2026-03-27 00:02:30.554481 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-27 00:02:30.554486 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-27 00:02:30.554490 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.554494 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.554498 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.554502 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.554507 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-03-27 00:02:30.554510 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.554546 | orchestrator |       + size = 20
2026-03-27 00:02:30.554551 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.554555 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.554559 | orchestrator |     }
2026-03-27 00:02:30.554644 | orchestrator |
2026-03-27 00:02:30.554656 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-27 00:02:30.554660 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-27 00:02:30.554665 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.554669 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.554673 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.554677 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.554681 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-03-27 00:02:30.554686 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.554694 | orchestrator |       + size = 20
2026-03-27 00:02:30.554699 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.554703 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.554707 | orchestrator |     }
2026-03-27 00:02:30.554782 | orchestrator |
2026-03-27 00:02:30.554794 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-27 00:02:30.554799 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-27 00:02:30.554802 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.554807 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.554811 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.554819 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.554823 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-03-27 00:02:30.554828 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.554832 | orchestrator |       + size = 20
2026-03-27 00:02:30.554836 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.554840 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.554844 | orchestrator |     }
2026-03-27 00:02:30.554922 | orchestrator |
2026-03-27 00:02:30.554934 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-27 00:02:30.554939 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-27 00:02:30.554943 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.554947 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.554952 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.554956 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.554960 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-03-27 00:02:30.554964 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.554968 | orchestrator |       + size = 20
2026-03-27 00:02:30.554973 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.554977 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.554981 | orchestrator |     }
2026-03-27 00:02:30.555065 | orchestrator |
2026-03-27 00:02:30.555077 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-27 00:02:30.555082 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-27 00:02:30.555086 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.555090 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.555094 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.555098 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.555102 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-03-27 00:02:30.555107 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.555111 | orchestrator |       + size = 20
2026-03-27 00:02:30.555115 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.555119 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.555123 | orchestrator |     }
2026-03-27 00:02:30.555196 | orchestrator |
2026-03-27 00:02:30.555208 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-27 00:02:30.555212 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-27 00:02:30.555217 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.555221 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.555225 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.555229 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.555234 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-03-27 00:02:30.555238 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.555242 | orchestrator |       + size = 20
2026-03-27 00:02:30.555246 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.555251 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.555255 | orchestrator |     }
2026-03-27 00:02:30.555328 | orchestrator |
2026-03-27 00:02:30.555339 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-27 00:02:30.555344 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-27 00:02:30.555352 | orchestrator |       + attachment = (known after apply)
2026-03-27 00:02:30.555357 | orchestrator |       + availability_zone = "nova"
2026-03-27 00:02:30.555361 | orchestrator |       + id = (known after apply)
2026-03-27 00:02:30.555365 | orchestrator |       + metadata = (known after apply)
2026-03-27 00:02:30.555370 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-03-27 00:02:30.555374 | orchestrator |       + region = (known after apply)
2026-03-27 00:02:30.555378 | orchestrator |       + size = 20
2026-03-27 00:02:30.555382 | orchestrator |       + volume_retype_policy = "never"
2026-03-27 00:02:30.555386 | orchestrator |       + volume_type = "ssd"
2026-03-27 00:02:30.555390 | orchestrator |     }
2026-03-27 00:02:30.555471 | orchestrator |
2026-03-27 00:02:30.555485 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-27 00:02:30.555489 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-27 00:02:30.555493 | orchestrator | + attachment = (known after apply) 2026-03-27 00:02:30.555498 | orchestrator | + availability_zone = "nova" 2026-03-27 00:02:30.555502 | orchestrator | + id = (known after apply) 2026-03-27 00:02:30.555506 | orchestrator | + metadata = (known after apply) 2026-03-27 00:02:30.555510 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-27 00:02:30.555527 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.555532 | orchestrator | + size = 20 2026-03-27 00:02:30.555536 | orchestrator | + volume_retype_policy = "never" 2026-03-27 00:02:30.555540 | orchestrator | + volume_type = "ssd" 2026-03-27 00:02:30.555544 | orchestrator | } 2026-03-27 00:02:30.555817 | orchestrator | 2026-03-27 00:02:30.555836 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-27 00:02:30.555841 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-27 00:02:30.555845 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-27 00:02:30.555849 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-27 00:02:30.555853 | orchestrator | + all_metadata = (known after apply) 2026-03-27 00:02:30.555857 | orchestrator | + all_tags = (known after apply) 2026-03-27 00:02:30.555861 | orchestrator | + availability_zone = "nova" 2026-03-27 00:02:30.555865 | orchestrator | + config_drive = true 2026-03-27 00:02:30.555872 | orchestrator | + created = (known after apply) 2026-03-27 00:02:30.555876 | orchestrator | + flavor_id = (known after apply) 2026-03-27 00:02:30.555880 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-27 00:02:30.555884 | orchestrator | + force_delete = false 2026-03-27 00:02:30.555888 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-27 00:02:30.555893 | 
orchestrator | + id = (known after apply) 2026-03-27 00:02:30.555897 | orchestrator | + image_id = (known after apply) 2026-03-27 00:02:30.555901 | orchestrator | + image_name = (known after apply) 2026-03-27 00:02:30.555905 | orchestrator | + key_pair = "testbed" 2026-03-27 00:02:30.555909 | orchestrator | + name = "testbed-manager" 2026-03-27 00:02:30.555913 | orchestrator | + power_state = "active" 2026-03-27 00:02:30.555917 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.555921 | orchestrator | + security_groups = (known after apply) 2026-03-27 00:02:30.555925 | orchestrator | + stop_before_destroy = false 2026-03-27 00:02:30.555929 | orchestrator | + updated = (known after apply) 2026-03-27 00:02:30.555933 | orchestrator | + user_data = (sensitive value) 2026-03-27 00:02:30.555937 | orchestrator | 2026-03-27 00:02:30.555941 | orchestrator | + block_device { 2026-03-27 00:02:30.555945 | orchestrator | + boot_index = 0 2026-03-27 00:02:30.555949 | orchestrator | + delete_on_termination = false 2026-03-27 00:02:30.555953 | orchestrator | + destination_type = "volume" 2026-03-27 00:02:30.555957 | orchestrator | + multiattach = false 2026-03-27 00:02:30.555961 | orchestrator | + source_type = "volume" 2026-03-27 00:02:30.555965 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.555974 | orchestrator | } 2026-03-27 00:02:30.555978 | orchestrator | 2026-03-27 00:02:30.555982 | orchestrator | + network { 2026-03-27 00:02:30.555986 | orchestrator | + access_network = false 2026-03-27 00:02:30.555990 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-27 00:02:30.555994 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-27 00:02:30.555998 | orchestrator | + mac = (known after apply) 2026-03-27 00:02:30.556002 | orchestrator | + name = (known after apply) 2026-03-27 00:02:30.556006 | orchestrator | + port = (known after apply) 2026-03-27 00:02:30.556010 | orchestrator | + uuid = (known after apply) 2026-03-27 
00:02:30.556014 | orchestrator | } 2026-03-27 00:02:30.556018 | orchestrator | } 2026-03-27 00:02:30.556279 | orchestrator | 2026-03-27 00:02:30.556294 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-27 00:02:30.556299 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-27 00:02:30.556305 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-27 00:02:30.556312 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-27 00:02:30.556319 | orchestrator | + all_metadata = (known after apply) 2026-03-27 00:02:30.556324 | orchestrator | + all_tags = (known after apply) 2026-03-27 00:02:30.556331 | orchestrator | + availability_zone = "nova" 2026-03-27 00:02:30.556337 | orchestrator | + config_drive = true 2026-03-27 00:02:30.556343 | orchestrator | + created = (known after apply) 2026-03-27 00:02:30.556350 | orchestrator | + flavor_id = (known after apply) 2026-03-27 00:02:30.556357 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-27 00:02:30.556363 | orchestrator | + force_delete = false 2026-03-27 00:02:30.556371 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-27 00:02:30.556376 | orchestrator | + id = (known after apply) 2026-03-27 00:02:30.556380 | orchestrator | + image_id = (known after apply) 2026-03-27 00:02:30.556384 | orchestrator | + image_name = (known after apply) 2026-03-27 00:02:30.556388 | orchestrator | + key_pair = "testbed" 2026-03-27 00:02:30.556392 | orchestrator | + name = "testbed-node-0" 2026-03-27 00:02:30.556396 | orchestrator | + power_state = "active" 2026-03-27 00:02:30.556400 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.556405 | orchestrator | + security_groups = (known after apply) 2026-03-27 00:02:30.556409 | orchestrator | + stop_before_destroy = false 2026-03-27 00:02:30.556414 | orchestrator | + updated = (known after apply) 2026-03-27 00:02:30.556417 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-27 00:02:30.556422 | orchestrator | 2026-03-27 00:02:30.556426 | orchestrator | + block_device { 2026-03-27 00:02:30.556430 | orchestrator | + boot_index = 0 2026-03-27 00:02:30.556434 | orchestrator | + delete_on_termination = false 2026-03-27 00:02:30.556438 | orchestrator | + destination_type = "volume" 2026-03-27 00:02:30.556443 | orchestrator | + multiattach = false 2026-03-27 00:02:30.556447 | orchestrator | + source_type = "volume" 2026-03-27 00:02:30.556451 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.556455 | orchestrator | } 2026-03-27 00:02:30.556460 | orchestrator | 2026-03-27 00:02:30.556464 | orchestrator | + network { 2026-03-27 00:02:30.556468 | orchestrator | + access_network = false 2026-03-27 00:02:30.556472 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-27 00:02:30.556476 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-27 00:02:30.556480 | orchestrator | + mac = (known after apply) 2026-03-27 00:02:30.556484 | orchestrator | + name = (known after apply) 2026-03-27 00:02:30.556488 | orchestrator | + port = (known after apply) 2026-03-27 00:02:30.556493 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.556497 | orchestrator | } 2026-03-27 00:02:30.556501 | orchestrator | } 2026-03-27 00:02:30.556820 | orchestrator | 2026-03-27 00:02:30.556850 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-27 00:02:30.556855 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-27 00:02:30.556860 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-27 00:02:30.556869 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-27 00:02:30.556874 | orchestrator | + all_metadata = (known after apply) 2026-03-27 00:02:30.556878 | orchestrator | + all_tags = (known after apply) 2026-03-27 00:02:30.556882 | orchestrator | + availability_zone = "nova" 2026-03-27 00:02:30.556886 
| orchestrator | + config_drive = true 2026-03-27 00:02:30.556890 | orchestrator | + created = (known after apply) 2026-03-27 00:02:30.556893 | orchestrator | + flavor_id = (known after apply) 2026-03-27 00:02:30.556897 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-27 00:02:30.556901 | orchestrator | + force_delete = false 2026-03-27 00:02:30.556905 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-27 00:02:30.556908 | orchestrator | + id = (known after apply) 2026-03-27 00:02:30.556912 | orchestrator | + image_id = (known after apply) 2026-03-27 00:02:30.556916 | orchestrator | + image_name = (known after apply) 2026-03-27 00:02:30.556920 | orchestrator | + key_pair = "testbed" 2026-03-27 00:02:30.556923 | orchestrator | + name = "testbed-node-1" 2026-03-27 00:02:30.556927 | orchestrator | + power_state = "active" 2026-03-27 00:02:30.556931 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.556935 | orchestrator | + security_groups = (known after apply) 2026-03-27 00:02:30.556939 | orchestrator | + stop_before_destroy = false 2026-03-27 00:02:30.556943 | orchestrator | + updated = (known after apply) 2026-03-27 00:02:30.556949 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-27 00:02:30.556953 | orchestrator | 2026-03-27 00:02:30.556957 | orchestrator | + block_device { 2026-03-27 00:02:30.556961 | orchestrator | + boot_index = 0 2026-03-27 00:02:30.556965 | orchestrator | + delete_on_termination = false 2026-03-27 00:02:30.556968 | orchestrator | + destination_type = "volume" 2026-03-27 00:02:30.556972 | orchestrator | + multiattach = false 2026-03-27 00:02:30.556976 | orchestrator | + source_type = "volume" 2026-03-27 00:02:30.556980 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.556984 | orchestrator | } 2026-03-27 00:02:30.556987 | orchestrator | 2026-03-27 00:02:30.556991 | orchestrator | + network { 2026-03-27 00:02:30.556995 | orchestrator | + access_network = 
false 2026-03-27 00:02:30.556999 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-27 00:02:30.557002 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-27 00:02:30.557006 | orchestrator | + mac = (known after apply) 2026-03-27 00:02:30.557010 | orchestrator | + name = (known after apply) 2026-03-27 00:02:30.557014 | orchestrator | + port = (known after apply) 2026-03-27 00:02:30.557017 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.557021 | orchestrator | } 2026-03-27 00:02:30.557025 | orchestrator | } 2026-03-27 00:02:30.562349 | orchestrator | 2026-03-27 00:02:30.562416 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-27 00:02:30.562426 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-27 00:02:30.562433 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-27 00:02:30.562440 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-27 00:02:30.562449 | orchestrator | + all_metadata = (known after apply) 2026-03-27 00:02:30.562456 | orchestrator | + all_tags = (known after apply) 2026-03-27 00:02:30.562461 | orchestrator | + availability_zone = "nova" 2026-03-27 00:02:30.562465 | orchestrator | + config_drive = true 2026-03-27 00:02:30.562469 | orchestrator | + created = (known after apply) 2026-03-27 00:02:30.562473 | orchestrator | + flavor_id = (known after apply) 2026-03-27 00:02:30.562477 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-27 00:02:30.562481 | orchestrator | + force_delete = false 2026-03-27 00:02:30.562484 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-27 00:02:30.562488 | orchestrator | + id = (known after apply) 2026-03-27 00:02:30.562492 | orchestrator | + image_id = (known after apply) 2026-03-27 00:02:30.562510 | orchestrator | + image_name = (known after apply) 2026-03-27 00:02:30.562531 | orchestrator | + key_pair = "testbed" 2026-03-27 00:02:30.562537 | orchestrator | + name = 
"testbed-node-2" 2026-03-27 00:02:30.562544 | orchestrator | + power_state = "active" 2026-03-27 00:02:30.562550 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.562556 | orchestrator | + security_groups = (known after apply) 2026-03-27 00:02:30.562563 | orchestrator | + stop_before_destroy = false 2026-03-27 00:02:30.562569 | orchestrator | + updated = (known after apply) 2026-03-27 00:02:30.562575 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-27 00:02:30.562582 | orchestrator | 2026-03-27 00:02:30.562588 | orchestrator | + block_device { 2026-03-27 00:02:30.562594 | orchestrator | + boot_index = 0 2026-03-27 00:02:30.562599 | orchestrator | + delete_on_termination = false 2026-03-27 00:02:30.562605 | orchestrator | + destination_type = "volume" 2026-03-27 00:02:30.562612 | orchestrator | + multiattach = false 2026-03-27 00:02:30.562618 | orchestrator | + source_type = "volume" 2026-03-27 00:02:30.562624 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.562631 | orchestrator | } 2026-03-27 00:02:30.562637 | orchestrator | 2026-03-27 00:02:30.562644 | orchestrator | + network { 2026-03-27 00:02:30.562650 | orchestrator | + access_network = false 2026-03-27 00:02:30.562656 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-27 00:02:30.562661 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-27 00:02:30.562667 | orchestrator | + mac = (known after apply) 2026-03-27 00:02:30.562673 | orchestrator | + name = (known after apply) 2026-03-27 00:02:30.562679 | orchestrator | + port = (known after apply) 2026-03-27 00:02:30.562686 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.562692 | orchestrator | } 2026-03-27 00:02:30.562698 | orchestrator | } 2026-03-27 00:02:30.582262 | orchestrator | 2026-03-27 00:02:30.582351 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-27 00:02:30.582363 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-27 00:02:30.582370 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-27 00:02:30.582377 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-27 00:02:30.582384 | orchestrator | + all_metadata = (known after apply) 2026-03-27 00:02:30.582390 | orchestrator | + all_tags = (known after apply) 2026-03-27 00:02:30.582397 | orchestrator | + availability_zone = "nova" 2026-03-27 00:02:30.582403 | orchestrator | + config_drive = true 2026-03-27 00:02:30.582410 | orchestrator | + created = (known after apply) 2026-03-27 00:02:30.582416 | orchestrator | + flavor_id = (known after apply) 2026-03-27 00:02:30.582439 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-27 00:02:30.582445 | orchestrator | + force_delete = false 2026-03-27 00:02:30.582452 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-27 00:02:30.582459 | orchestrator | + id = (known after apply) 2026-03-27 00:02:30.582465 | orchestrator | + image_id = (known after apply) 2026-03-27 00:02:30.582471 | orchestrator | + image_name = (known after apply) 2026-03-27 00:02:30.582477 | orchestrator | + key_pair = "testbed" 2026-03-27 00:02:30.582484 | orchestrator | + name = "testbed-node-3" 2026-03-27 00:02:30.582490 | orchestrator | + power_state = "active" 2026-03-27 00:02:30.582497 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.582503 | orchestrator | + security_groups = (known after apply) 2026-03-27 00:02:30.582509 | orchestrator | + stop_before_destroy = false 2026-03-27 00:02:30.582529 | orchestrator | + updated = (known after apply) 2026-03-27 00:02:30.582535 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-27 00:02:30.582543 | orchestrator | 2026-03-27 00:02:30.582550 | orchestrator | + block_device { 2026-03-27 00:02:30.582557 | orchestrator | + boot_index = 0 2026-03-27 00:02:30.582564 | orchestrator | + delete_on_termination = false 2026-03-27 
00:02:30.582570 | orchestrator | + destination_type = "volume" 2026-03-27 00:02:30.582595 | orchestrator | + multiattach = false 2026-03-27 00:02:30.582602 | orchestrator | + source_type = "volume" 2026-03-27 00:02:30.582609 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.582616 | orchestrator | } 2026-03-27 00:02:30.582623 | orchestrator | 2026-03-27 00:02:30.582630 | orchestrator | + network { 2026-03-27 00:02:30.582637 | orchestrator | + access_network = false 2026-03-27 00:02:30.582643 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-27 00:02:30.582650 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-27 00:02:30.582657 | orchestrator | + mac = (known after apply) 2026-03-27 00:02:30.582664 | orchestrator | + name = (known after apply) 2026-03-27 00:02:30.582671 | orchestrator | + port = (known after apply) 2026-03-27 00:02:30.582678 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.582683 | orchestrator | } 2026-03-27 00:02:30.582690 | orchestrator | } 2026-03-27 00:02:30.582709 | orchestrator | 2026-03-27 00:02:30.582717 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-27 00:02:30.582724 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-27 00:02:30.582731 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-27 00:02:30.582737 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-27 00:02:30.582743 | orchestrator | + all_metadata = (known after apply) 2026-03-27 00:02:30.582750 | orchestrator | + all_tags = (known after apply) 2026-03-27 00:02:30.582757 | orchestrator | + availability_zone = "nova" 2026-03-27 00:02:30.582763 | orchestrator | + config_drive = true 2026-03-27 00:02:30.582770 | orchestrator | + created = (known after apply) 2026-03-27 00:02:30.582904 | orchestrator | + flavor_id = (known after apply) 2026-03-27 00:02:30.582911 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-27 00:02:30.582918 | 
orchestrator | + force_delete = false 2026-03-27 00:02:30.582925 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-27 00:02:30.582932 | orchestrator | + id = (known after apply) 2026-03-27 00:02:30.582938 | orchestrator | + image_id = (known after apply) 2026-03-27 00:02:30.582945 | orchestrator | + image_name = (known after apply) 2026-03-27 00:02:30.582952 | orchestrator | + key_pair = "testbed" 2026-03-27 00:02:30.582958 | orchestrator | + name = "testbed-node-4" 2026-03-27 00:02:30.582965 | orchestrator | + power_state = "active" 2026-03-27 00:02:30.582971 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.582978 | orchestrator | + security_groups = (known after apply) 2026-03-27 00:02:30.582985 | orchestrator | + stop_before_destroy = false 2026-03-27 00:02:30.582992 | orchestrator | + updated = (known after apply) 2026-03-27 00:02:30.582999 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-27 00:02:30.583006 | orchestrator | 2026-03-27 00:02:30.583013 | orchestrator | + block_device { 2026-03-27 00:02:30.583019 | orchestrator | + boot_index = 0 2026-03-27 00:02:30.583026 | orchestrator | + delete_on_termination = false 2026-03-27 00:02:30.583033 | orchestrator | + destination_type = "volume" 2026-03-27 00:02:30.583039 | orchestrator | + multiattach = false 2026-03-27 00:02:30.583045 | orchestrator | + source_type = "volume" 2026-03-27 00:02:30.583053 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.583059 | orchestrator | } 2026-03-27 00:02:30.583065 | orchestrator | 2026-03-27 00:02:30.583071 | orchestrator | + network { 2026-03-27 00:02:30.583077 | orchestrator | + access_network = false 2026-03-27 00:02:30.583083 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-27 00:02:30.583089 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-27 00:02:30.583095 | orchestrator | + mac = (known after apply) 2026-03-27 00:02:30.583102 | orchestrator | + name = (known 
after apply) 2026-03-27 00:02:30.583107 | orchestrator | + port = (known after apply) 2026-03-27 00:02:30.583113 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.583120 | orchestrator | } 2026-03-27 00:02:30.583125 | orchestrator | } 2026-03-27 00:02:30.583144 | orchestrator | 2026-03-27 00:02:30.583150 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-27 00:02:30.583157 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-27 00:02:30.583163 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-27 00:02:30.583170 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-27 00:02:30.583176 | orchestrator | + all_metadata = (known after apply) 2026-03-27 00:02:30.583183 | orchestrator | + all_tags = (known after apply) 2026-03-27 00:02:30.583189 | orchestrator | + availability_zone = "nova" 2026-03-27 00:02:30.583195 | orchestrator | + config_drive = true 2026-03-27 00:02:30.583201 | orchestrator | + created = (known after apply) 2026-03-27 00:02:30.583208 | orchestrator | + flavor_id = (known after apply) 2026-03-27 00:02:30.583215 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-27 00:02:30.583222 | orchestrator | + force_delete = false 2026-03-27 00:02:30.583229 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-27 00:02:30.583236 | orchestrator | + id = (known after apply) 2026-03-27 00:02:30.583243 | orchestrator | + image_id = (known after apply) 2026-03-27 00:02:30.583249 | orchestrator | + image_name = (known after apply) 2026-03-27 00:02:30.583256 | orchestrator | + key_pair = "testbed" 2026-03-27 00:02:30.583262 | orchestrator | + name = "testbed-node-5" 2026-03-27 00:02:30.583268 | orchestrator | + power_state = "active" 2026-03-27 00:02:30.583276 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.583283 | orchestrator | + security_groups = (known after apply) 2026-03-27 00:02:30.583290 | orchestrator | + 
stop_before_destroy = false 2026-03-27 00:02:30.583297 | orchestrator | + updated = (known after apply) 2026-03-27 00:02:30.583304 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-27 00:02:30.583311 | orchestrator | 2026-03-27 00:02:30.583317 | orchestrator | + block_device { 2026-03-27 00:02:30.583324 | orchestrator | + boot_index = 0 2026-03-27 00:02:30.583331 | orchestrator | + delete_on_termination = false 2026-03-27 00:02:30.583339 | orchestrator | + destination_type = "volume" 2026-03-27 00:02:30.583345 | orchestrator | + multiattach = false 2026-03-27 00:02:30.583353 | orchestrator | + source_type = "volume" 2026-03-27 00:02:30.583361 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.583416 | orchestrator | } 2026-03-27 00:02:30.583427 | orchestrator | 2026-03-27 00:02:30.583435 | orchestrator | + network { 2026-03-27 00:02:30.583441 | orchestrator | + access_network = false 2026-03-27 00:02:30.583448 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-27 00:02:30.583454 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-27 00:02:30.583461 | orchestrator | + mac = (known after apply) 2026-03-27 00:02:30.583468 | orchestrator | + name = (known after apply) 2026-03-27 00:02:30.583475 | orchestrator | + port = (known after apply) 2026-03-27 00:02:30.583481 | orchestrator | + uuid = (known after apply) 2026-03-27 00:02:30.583488 | orchestrator | } 2026-03-27 00:02:30.583494 | orchestrator | } 2026-03-27 00:02:30.583501 | orchestrator | 2026-03-27 00:02:30.583508 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-27 00:02:30.583558 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-27 00:02:30.583567 | orchestrator | + fingerprint = (known after apply) 2026-03-27 00:02:30.583574 | orchestrator | + id = (known after apply) 2026-03-27 00:02:30.583581 | orchestrator | + name = "testbed" 2026-03-27 00:02:30.583588 | orchestrator | + private_key = 
(sensitive value) 2026-03-27 00:02:30.583594 | orchestrator | + public_key = (known after apply) 2026-03-27 00:02:30.583612 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.583619 | orchestrator | + user_id = (known after apply) 2026-03-27 00:02:30.583626 | orchestrator | } 2026-03-27 00:02:30.583632 | orchestrator | 2026-03-27 00:02:30.583650 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-27 00:02:30.583657 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-27 00:02:30.583762 | orchestrator | + device = (known after apply) 2026-03-27 00:02:30.583772 | orchestrator | + id = (known after apply) 2026-03-27 00:02:30.583779 | orchestrator | + instance_id = (known after apply) 2026-03-27 00:02:30.583786 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.583800 | orchestrator | + volume_id = (known after apply) 2026-03-27 00:02:30.583808 | orchestrator | } 2026-03-27 00:02:30.583814 | orchestrator | 2026-03-27 00:02:30.583821 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-27 00:02:30.583828 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-27 00:02:30.583834 | orchestrator | + device = (known after apply) 2026-03-27 00:02:30.583841 | orchestrator | + id = (known after apply) 2026-03-27 00:02:30.583848 | orchestrator | + instance_id = (known after apply) 2026-03-27 00:02:30.583855 | orchestrator | + region = (known after apply) 2026-03-27 00:02:30.583861 | orchestrator | + volume_id = (known after apply) 2026-03-27 00:02:30.583868 | orchestrator | } 2026-03-27 00:02:30.583874 | orchestrator | 2026-03-27 00:02:30.583881 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-27 00:02:30.583888 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-27 00:02:30.590351 | orchestrator | + network_id = (known after apply)
2026-03-27 00:02:30.590357 | orchestrator | + no_gateway = false
2026-03-27 00:02:30.590364 | orchestrator | + region = (known after apply)
2026-03-27 00:02:30.590370 | orchestrator | + service_types = (known after apply)
2026-03-27 00:02:30.590383 | orchestrator | + tenant_id = (known after apply)
2026-03-27 00:02:30.590390 | orchestrator |
2026-03-27 00:02:30.590397 | orchestrator | + allocation_pool {
2026-03-27 00:02:30.590403 | orchestrator | + end = "192.168.31.250"
2026-03-27 00:02:30.590410 | orchestrator | + start = "192.168.31.200"
2026-03-27 00:02:30.590416 | orchestrator | }
2026-03-27 00:02:30.590422 | orchestrator | }
2026-03-27 00:02:30.590428 | orchestrator |
2026-03-27 00:02:30.590434 | orchestrator | # terraform_data.image will be created
2026-03-27 00:02:30.590440 | orchestrator | + resource "terraform_data" "image" {
2026-03-27 00:02:30.590446 | orchestrator | + id = (known after apply)
2026-03-27 00:02:30.590457 | orchestrator | + input = "Ubuntu 24.04"
2026-03-27 00:02:30.590463 | orchestrator | + output = (known after apply)
2026-03-27 00:02:30.590469 | orchestrator | }
2026-03-27 00:02:30.590475 | orchestrator |
2026-03-27 00:02:30.590481 | orchestrator | # terraform_data.image_node will be created
2026-03-27 00:02:30.590488 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-27 00:02:30.590493 | orchestrator | + id = (known after apply)
2026-03-27 00:02:30.590500 | orchestrator | + input = "Ubuntu 24.04"
2026-03-27 00:02:30.590506 | orchestrator | + output = (known after apply)
2026-03-27 00:02:30.590557 | orchestrator | }
2026-03-27 00:02:30.590569 | orchestrator |
2026-03-27 00:02:30.590575 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-27 00:02:30.590582 | orchestrator |
2026-03-27 00:02:30.590588 | orchestrator | Changes to Outputs:
2026-03-27 00:02:30.590594 | orchestrator | + manager_address = (sensitive value)
2026-03-27 00:02:30.590600 | orchestrator | + private_key = (sensitive value)
2026-03-27 00:02:30.801301 | orchestrator | terraform_data.image: Creating...
2026-03-27 00:02:30.801357 | orchestrator | terraform_data.image: Creation complete after 0s [id=1dbb2598-0d77-9c4b-8238-0266a215404e]
2026-03-27 00:02:30.801363 | orchestrator | terraform_data.image_node: Creating...
2026-03-27 00:02:30.801369 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=1289edbd-e277-24a1-72e5-677870c33d53]
2026-03-27 00:02:30.817841 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-27 00:02:30.817912 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-27 00:02:30.827859 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-27 00:02:30.827929 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-27 00:02:30.827942 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-27 00:02:30.827946 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-27 00:02:30.832694 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-27 00:02:30.832717 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-27 00:02:30.832898 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-27 00:02:30.838679 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-27 00:02:31.297081 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-27 00:02:31.306388 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-27 00:02:31.316091 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-27 00:02:31.322003 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-27 00:02:31.388496 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-27 00:02:31.395202 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-27 00:02:31.977763 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=e1b1a3d1-e5dc-4fca-b411-016f77b52297]
2026-03-27 00:02:31.987773 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-27 00:02:34.569328 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=131bb9e5-0133-49dd-b67b-125236a47022]
2026-03-27 00:02:34.579159 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-27 00:02:34.598805 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=3917e6ab-68a3-44be-970a-31d9d2a57984]
2026-03-27 00:02:34.605409 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-27 00:02:34.627838 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=62ab2900-9bbe-4288-89a4-62dba7ae92ab]
2026-03-27 00:02:34.638136 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=2796c507-44e5-4ccf-b3e2-014e00eaf9ef]
2026-03-27 00:02:34.642827 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-27 00:02:34.649670 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-27 00:02:34.683449 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=86c6402f-d184-4443-979d-ecd201841231]
2026-03-27 00:02:34.690691 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-27 00:02:34.750652 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=3878b4cc-7fe4-4758-b0af-fcf7391d431c]
2026-03-27 00:02:34.761230 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-27 00:02:34.765064 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=a8fe8ddc8ca01e208aa47d30a8d74cf990535e21]
2026-03-27 00:02:34.769856 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-27 00:02:34.780752 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=53da1fd0-572d-430c-b2ac-506bde32f617]
2026-03-27 00:02:34.789077 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-27 00:02:34.801687 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=7b5f97a992e0421c78f72feac10b5cde1a10d8f9]
2026-03-27 00:02:34.809755 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-27 00:02:34.840029 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=0ff86b74-b83b-4d7e-b564-01c0b90f308d]
2026-03-27 00:02:35.118955 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=52ce1f02-342d-40b1-ab4b-d26aefe85f26]
2026-03-27 00:02:35.386551 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=7111b8b1-4c28-4223-b706-655f7cf323ab]
2026-03-27 00:02:35.862332 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=e7c1974e-9d82-4aa2-9374-2e248b9e05ab]
2026-03-27 00:02:35.874203 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-27 00:02:38.038510 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=2f18c829-b3cd-4f22-b402-72c3edab461d]
2026-03-27 00:02:38.151815 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=4b291496-18ea-45da-96d1-ca760a1ff526]
2026-03-27 00:02:38.182424 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=6016cd75-c7c0-403c-b545-4970d85db376]
2026-03-27 00:02:38.201140 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=a468aa16-2d5a-4768-ab18-db6a6ccef41a]
2026-03-27 00:02:38.227869 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=40e5dcb4-672b-4763-9a98-56119e00a3ac]
2026-03-27 00:02:38.423095 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=967da385-7d5e-4e32-b850-70936458610b]
2026-03-27 00:02:39.450833 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=0642d090-155b-466f-b461-2931e8d71cb5]
2026-03-27 00:02:39.460388 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-27 00:02:39.461502 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-27 00:02:39.461654 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-27 00:02:39.755369 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=7c3ebfe8-de49-4bf3-80d5-1263122c1454]
2026-03-27 00:02:39.766231 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-27 00:02:39.767376 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-27 00:02:39.768108 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-27 00:02:39.773292 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-27 00:02:39.774175 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-27 00:02:39.775325 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-27 00:02:39.775612 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=44e20c98-dedc-4404-98e5-3785d60b8164]
2026-03-27 00:02:39.779618 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-27 00:02:39.779657 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-27 00:02:39.781371 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-27 00:02:39.976168 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=22feafbe-f961-4293-827d-27a711b4fd37]
2026-03-27 00:02:39.982700 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-27 00:02:40.307276 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=6f450c1b-3ab9-4597-b9e0-91f8e456acaa]
2026-03-27 00:02:40.328155 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-27 00:02:40.389511 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=56e484b0-1906-42a1-a9d2-bbf715b2b266]
2026-03-27 00:02:40.400340 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-27 00:02:40.868096 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=38e430cc-f462-400c-a926-904bc25eccda]
2026-03-27 00:02:40.874436 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=0f55f515-c3f9-4de2-aff9-e967106dc834]
2026-03-27 00:02:40.877917 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-27 00:02:40.891512 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-27 00:02:41.309926 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=9567227f-df1f-4c80-a70d-3adad4166525]
2026-03-27 00:02:41.320016 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-27 00:02:41.325297 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=2ac380f3-8b25-4640-946b-bb9f7505a14d]
2026-03-27 00:02:41.338138 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-27 00:02:41.506856 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=21ae1198-6c13-4786-9c56-59267516f02a]
2026-03-27 00:02:41.553172 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=ab30bb2d-bcf8-4df6-89b7-377c51d6542c]
2026-03-27 00:02:41.584483 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=95b65a33-7335-4b05-9a4e-100d9be88d16]
2026-03-27 00:02:41.612112 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=a50c8225-71ad-45e0-8432-d85c1fb07c3f]
2026-03-27 00:02:41.736420 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=c646f19c-f46a-47be-91e4-f179f1d2ebf1]
2026-03-27 00:02:42.016073 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=225a1a46-1228-456b-aa05-d5b238d4f6fe]
2026-03-27 00:02:42.496104 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=553abd29-85b5-4c6d-97e9-1bc5505f8182]
2026-03-27 00:02:42.968367 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=eb7a89db-1285-463c-9785-6f73d8a4e5c0]
2026-03-27 00:02:43.044994 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=da03f508-19f8-4cfa-9f1e-fbce672f9d2c]
2026-03-27 00:02:44.358080 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=f041a0cb-9db2-47ac-91ee-d566d91e387f]
2026-03-27 00:02:44.365615 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-27 00:02:44.376400 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-27 00:02:44.385055 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-27 00:02:44.395014 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-27 00:02:44.400295 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-27 00:02:44.403954 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-27 00:02:44.406492 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-27 00:02:46.642183 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=77c0ce76-31f2-4230-8527-a68dfcbffe84]
2026-03-27 00:02:46.654263 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-27 00:02:46.660227 | orchestrator | local_file.inventory: Creating...
2026-03-27 00:02:46.660391 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-27 00:02:46.664417 | orchestrator | local_file.inventory: Creation complete after 0s [id=4a7075001a3f83a0eb41c8ca94a034ba7a8fe8e3]
2026-03-27 00:02:46.666423 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=c1b5b44350b658c62a32c36f0495ef083ebf0945]
2026-03-27 00:02:48.251735 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=77c0ce76-31f2-4230-8527-a68dfcbffe84]
2026-03-27 00:02:54.378653 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-27 00:02:54.385976 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-27 00:02:54.396768 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-27 00:02:54.402999 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-27 00:02:54.406260 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-27 00:02:54.407445 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-27 00:03:04.379967 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-27 00:03:04.386225 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-27 00:03:04.397567 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-27 00:03:04.403764 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-27 00:03:04.407076 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-27 00:03:04.408296 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-27 00:03:14.389844 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-27 00:03:14.389951 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-27 00:03:14.398219 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-27 00:03:14.404683 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-27 00:03:14.408085 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-27 00:03:14.408174 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-27 00:03:24.399649 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-27 00:03:24.399794 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-27 00:03:24.399820 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-27 00:03:24.405514 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-27 00:03:24.408778 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-27 00:03:24.408880 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-27 00:03:25.196075 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=b834811c-3ef0-4c6b-87fa-34bd71c5d890]
2026-03-27 00:03:25.211189 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=6929aa0f-7054-4e04-b421-4848fd68ec2f]
2026-03-27 00:03:25.689776 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 42s [id=484d070f-6658-4f06-bff3-00f4f6cbc61f]
2026-03-27 00:03:34.408124 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-03-27 00:03:34.408236 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-03-27 00:03:34.409398 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-03-27 00:03:35.359446 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=c8bdc852-3c78-45e9-a6ea-4c59136c0764]
2026-03-27 00:03:44.415595 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-03-27 00:03:44.415694 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-03-27 00:03:45.288355 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m1s [id=2af03ab0-5fde-41d7-b37f-5868d6c2543e]
2026-03-27 00:03:45.355385 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m1s [id=d427c5b2-75fa-4c71-9c81-4bfb0d85783d]
2026-03-27 00:03:45.386358 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-27 00:03:45.393798 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-27 00:03:45.416879 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-27 00:03:45.418657 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-27 00:03:45.426587 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-27 00:03:45.454068 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7990399044677388803]
2026-03-27 00:03:45.454114 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-27 00:03:45.454119 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-27 00:03:45.454123 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-27 00:03:45.454127 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-27 00:03:45.454139 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-27 00:03:45.532629 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-27 00:03:48.964293 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=6929aa0f-7054-4e04-b421-4848fd68ec2f/2796c507-44e5-4ccf-b3e2-014e00eaf9ef]
2026-03-27 00:03:48.973341 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=c8bdc852-3c78-45e9-a6ea-4c59136c0764/3917e6ab-68a3-44be-970a-31d9d2a57984]
2026-03-27 00:03:49.351933 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=d427c5b2-75fa-4c71-9c81-4bfb0d85783d/52ce1f02-342d-40b1-ab4b-d26aefe85f26]
2026-03-27 00:03:52.933490 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 8s [id=6929aa0f-7054-4e04-b421-4848fd68ec2f/86c6402f-d184-4443-979d-ecd201841231]
2026-03-27 00:03:55.054109 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=6929aa0f-7054-4e04-b421-4848fd68ec2f/131bb9e5-0133-49dd-b67b-125236a47022]
2026-03-27 00:03:55.180914 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=c8bdc852-3c78-45e9-a6ea-4c59136c0764/53da1fd0-572d-430c-b2ac-506bde32f617]
2026-03-27 00:03:55.185629 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=d427c5b2-75fa-4c71-9c81-4bfb0d85783d/0ff86b74-b83b-4d7e-b564-01c0b90f308d]
2026-03-27 00:03:55.451747 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Still creating... [10s elapsed]
2026-03-27 00:03:55.453977 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Still creating... [10s elapsed]
2026-03-27 00:03:55.533770 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-27 00:03:55.830888 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 11s [id=d427c5b2-75fa-4c71-9c81-4bfb0d85783d/62ab2900-9bbe-4288-89a4-62dba7ae92ab]
2026-03-27 00:03:57.693793 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 13s [id=c8bdc852-3c78-45e9-a6ea-4c59136c0764/3878b4cc-7fe4-4758-b0af-fcf7391d431c]
2026-03-27 00:04:05.539291 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-27 00:04:06.208858 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=adb73076-581c-462e-a30f-a27c6b56aeb6]
2026-03-27 00:04:06.227549 | orchestrator |
2026-03-27 00:04:06.227656 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-27 00:04:06.227663 | orchestrator |
2026-03-27 00:04:06.227668 | orchestrator | Outputs:
2026-03-27 00:04:06.227673 | orchestrator |
2026-03-27 00:04:06.227691 | orchestrator | manager_address =
2026-03-27 00:04:06.227701 | orchestrator | private_key =
2026-03-27 00:04:06.543072 | orchestrator | ok: Runtime: 0:01:41.629572
2026-03-27 00:04:06.566403 |
2026-03-27 00:04:06.566544 | TASK [Create infrastructure (stable)]
2026-03-27 00:04:07.100507 | orchestrator | skipping: Conditional result was False
2026-03-27 00:04:07.113817 |
2026-03-27 00:04:07.114105 | TASK [Fetch manager address]
2026-03-27 00:04:07.619020 | orchestrator | ok
2026-03-27 00:04:07.627894 |
2026-03-27 00:04:07.628017 | TASK [Set manager_host address]
2026-03-27 00:04:07.709248 | orchestrator | ok
2026-03-27 00:04:07.721264 |
2026-03-27 00:04:07.721444 | LOOP [Update ansible collections]
2026-03-27 00:04:08.783960 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-27 00:04:08.784280 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-27 00:04:08.784343 | orchestrator | Starting galaxy collection install process
2026-03-27 00:04:08.784385 | orchestrator | Process install dependency map
2026-03-27 00:04:08.784422 | orchestrator | Starting collection install process
2026-03-27 00:04:08.784458 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2026-03-27 00:04:08.784504 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2026-03-27 00:04:08.784545 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-27 00:04:08.784628 | orchestrator | ok: Item: commons Runtime: 0:00:00.694167
2026-03-27 00:04:09.779414 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-27 00:04:09.779659 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-27 00:04:09.779713 | orchestrator | Starting galaxy collection install process
2026-03-27 00:04:09.779749 | orchestrator | Process install dependency map
2026-03-27 00:04:09.779781 | orchestrator | Starting collection install process
2026-03-27 00:04:09.779810 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2026-03-27 00:04:09.779839 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2026-03-27 00:04:09.779867 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-27 00:04:09.779913 | orchestrator | ok: Item: services Runtime: 0:00:00.726064
2026-03-27 00:04:09.805512 |
2026-03-27 00:04:09.805782 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-27 00:04:20.467993 | orchestrator | ok
2026-03-27 00:04:20.479120 |
2026-03-27 00:04:20.479259 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-27 00:05:20.534995 | orchestrator | ok
2026-03-27 00:05:20.546703 |
2026-03-27 00:05:20.546860 | TASK [Fetch manager ssh hostkey]
2026-03-27 00:05:22.126338 | orchestrator | Output suppressed because no_log was given
2026-03-27 00:05:22.147648 |
2026-03-27 00:05:22.147884 | TASK [Get ssh keypair from terraform environment]
2026-03-27 00:05:22.693710 | orchestrator | ok: Runtime: 0:00:00.007655
2026-03-27 00:05:22.715141 |
2026-03-27 00:05:22.715411 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-27 00:05:22.766718 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-27 00:05:22.777896 |
2026-03-27 00:05:22.778033 | TASK [Run manager part 0]
2026-03-27 00:05:23.918375 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-27 00:05:23.970040 | orchestrator |
2026-03-27 00:05:23.970087 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-27 00:05:23.970094 | orchestrator |
2026-03-27 00:05:23.970106 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-27 00:05:25.774176 | orchestrator | ok: [testbed-manager]
2026-03-27 00:05:25.774234 | orchestrator |
2026-03-27 00:05:25.774264 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-27 00:05:25.774277 | orchestrator |
2026-03-27 00:05:25.774290 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-27 00:05:27.622194 | orchestrator | ok: [testbed-manager]
2026-03-27 00:05:27.622226 | orchestrator |
2026-03-27 00:05:27.622235 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-27 00:05:28.233085 | orchestrator | ok: [testbed-manager]
2026-03-27 00:05:28.233120 | orchestrator |
2026-03-27 00:05:28.233130 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-27 00:05:28.264236 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:05:28.264266 | orchestrator |
2026-03-27 00:05:28.264278 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-27 00:05:28.294997 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:05:28.295027 | orchestrator |
2026-03-27 00:05:28.295037 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-27 00:05:28.334125 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:05:28.334159 | orchestrator |
2026-03-27 00:05:28.334167 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-27 00:05:29.004107 | orchestrator | changed: [testbed-manager]
2026-03-27 00:05:29.004150 | orchestrator |
2026-03-27 00:05:29.004159 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-27 00:08:21.573533 | orchestrator | changed: [testbed-manager]
2026-03-27 00:08:21.573642 | orchestrator |
2026-03-27 00:08:21.573661 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-27 00:09:42.358968 | orchestrator | changed: [testbed-manager]
2026-03-27 00:09:42.359013 | orchestrator |
2026-03-27 00:09:42.359023 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-27 00:10:05.173658 | orchestrator | changed: [testbed-manager]
2026-03-27 00:10:05.173761 | orchestrator |
2026-03-27 00:10:05.173780 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-27 00:10:13.552713 | orchestrator | changed: [testbed-manager]
2026-03-27 00:10:13.552816 | orchestrator |
2026-03-27 00:10:13.552834 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-27 00:10:13.598554 | orchestrator | ok: [testbed-manager]
2026-03-27 00:10:13.598642 | orchestrator |
2026-03-27 00:10:13.598663 | orchestrator | TASK [Get current user] ********************************************************
2026-03-27 00:10:14.418386 | orchestrator | ok: [testbed-manager]
2026-03-27 00:10:14.418506 | orchestrator |
2026-03-27 00:10:14.418533 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-27 00:10:15.171085 | orchestrator | changed: [testbed-manager]
2026-03-27 00:10:15.171173 | orchestrator |
2026-03-27 00:10:15.171192 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-27 00:10:21.336757 | orchestrator | changed: [testbed-manager]
2026-03-27 00:10:21.336847 | orchestrator |
2026-03-27 00:10:21.336862 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-27 00:10:27.279290 | orchestrator | changed: [testbed-manager]
2026-03-27 00:10:27.279395 | orchestrator |
2026-03-27 00:10:27.279408 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-03-27 00:10:29.764905 | orchestrator | changed: [testbed-manager]
2026-03-27 00:10:29.765007 | orchestrator |
2026-03-27 00:10:29.765031 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-03-27 00:10:31.522378 | orchestrator | changed: [testbed-manager]
2026-03-27 00:10:31.522423 | orchestrator |
2026-03-27 00:10:31.522449 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-03-27 00:10:32.606217 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-03-27 00:10:32.606560 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-03-27 00:10:32.606583 | orchestrator |
2026-03-27 00:10:32.606597 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2026-03-27 00:10:32.649748 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-03-27 00:10:32.649797 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-03-27 00:10:32.649803 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-03-27 00:10:32.649809 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-03-27 00:10:39.196154 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-03-27 00:10:39.196245 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-03-27 00:10:39.196259 | orchestrator |
2026-03-27 00:10:39.196272 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2026-03-27 00:10:39.762630 | orchestrator | changed: [testbed-manager]
2026-03-27 00:10:39.762721 | orchestrator |
2026-03-27 00:10:39.762737 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2026-03-27 00:13:00.767658 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2026-03-27 00:13:00.767776 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2026-03-27 00:13:00.767792 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2026-03-27 00:13:00.767803 | orchestrator |
2026-03-27 00:13:00.767815 | orchestrator | TASK [Install local collections] ***********************************************
2026-03-27 00:13:03.047926 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2026-03-27 00:13:03.048008 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2026-03-27 00:13:03.048022 | orchestrator |
2026-03-27 00:13:03.048037 | orchestrator | PLAY [Create operator user] ****************************************************
2026-03-27 00:13:03.048049 | orchestrator |
2026-03-27 00:13:03.048061 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-27 00:13:04.419132 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:04.419214 | orchestrator |
2026-03-27 00:13:04.419232 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-27 00:13:04.463669 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:04.463712 | orchestrator |
2026-03-27 00:13:04.463720 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-27 00:13:04.541802 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:04.541843 | orchestrator |
2026-03-27 00:13:04.541851 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-27 00:13:05.333734 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:05.333773 | orchestrator |
2026-03-27 00:13:05.333781 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-27 00:13:06.044055 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:06.044125 | orchestrator |
2026-03-27 00:13:06.044134 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-27 00:13:07.364483 | orchestrator | changed: [testbed-manager] => (item=adm)
2026-03-27 00:13:07.364530 | orchestrator | changed: [testbed-manager] => (item=sudo)
2026-03-27 00:13:07.364539 | orchestrator |
2026-03-27 00:13:07.364548 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-27 00:13:08.813081 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:08.813352 | orchestrator |
2026-03-27 00:13:08.813369 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-27 00:13:10.551781 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2026-03-27 00:13:10.551833 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2026-03-27 00:13:10.551849 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2026-03-27 00:13:10.551853 | orchestrator |
2026-03-27 00:13:10.551859 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-27 00:13:10.616376 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:10.616427 | orchestrator |
2026-03-27 00:13:10.616432 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-27 00:13:10.698641 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:10.698686 | orchestrator |
2026-03-27 00:13:10.698692 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-27 00:13:11.239164 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:11.239228 | orchestrator |
2026-03-27 00:13:11.239238 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-27 00:13:11.311570 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:11.311619 | orchestrator |
2026-03-27 00:13:11.311626 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-27 00:13:12.175027 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-27 00:13:12.175069 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:12.175076 | orchestrator |
2026-03-27 00:13:12.175080 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-27 00:13:12.213669 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:12.213742 | orchestrator |
2026-03-27 00:13:12.213754 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-27 00:13:12.248395 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:12.248513 | orchestrator |
2026-03-27 00:13:12.248545 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-27 00:13:12.305649 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:12.305749 | orchestrator |
2026-03-27 00:13:12.305772 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-27 00:13:12.383074 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:12.383125 | orchestrator |
2026-03-27 00:13:12.383134 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-27 00:13:13.115266 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:13.115311 | orchestrator |
2026-03-27 00:13:13.115316 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-27 00:13:13.115321 | orchestrator |
2026-03-27 00:13:13.115327 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-27 00:13:14.645406 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:14.645450 | orchestrator |
2026-03-27 00:13:14.645455 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2026-03-27 00:13:15.578570 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:15.578621 | orchestrator |
2026-03-27 00:13:15.578627 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:13:15.578633 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0
2026-03-27 00:13:15.578637 | orchestrator |
2026-03-27 00:13:16.117689 | orchestrator | ok: Runtime: 0:07:52.555434
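The operator play above loops over three `export` lines when setting language variables in `.bashrc`. A common way to implement such a task is `ansible.builtin.lineinfile` in a loop; this is a sketch of that pattern only, an assumption about the shape of the task, not the actual osism.commons.operator source:

```yaml
# Hypothetical sketch (not the real osism.commons.operator task):
# append each export line to the operator's .bashrc if it is missing.
- name: Set language variables in .bashrc configuration file
  ansible.builtin.lineinfile:
    path: /home/operator/.bashrc  # assumed home directory for illustration
    line: "{{ item }}"
    state: present
  loop:
    - export LANGUAGE=C.UTF-8
    - export LANG=C.UTF-8
    - export LC_ALL=C.UTF-8
```

`lineinfile` is idempotent per line, which matches the `changed:` output per item seen on the first run.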
2026-03-27 00:13:16.135032 |
2026-03-27 00:13:16.135186 | TASK [Point out that the log in on the manager is now possible]
2026-03-27 00:13:16.177759 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2026-03-27 00:13:16.190760 |
2026-03-27 00:13:16.190939 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-27 00:13:16.228778 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-27 00:13:16.238912 |
2026-03-27 00:13:16.239067 | TASK [Run manager part 1 + 2]
2026-03-27 00:13:17.998319 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-27 00:13:18.085481 | orchestrator |
2026-03-27 00:13:18.085594 | orchestrator | PLAY [Run manager part 1] ******************************************************
2026-03-27 00:13:18.085618 | orchestrator |
2026-03-27 00:13:18.085676 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-27 00:13:21.002675 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:21.002728 | orchestrator |
2026-03-27 00:13:21.002750 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-27 00:13:21.037090 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:21.037143 | orchestrator |
2026-03-27 00:13:21.037154 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-27 00:13:21.072323 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:21.072378 | orchestrator |
2026-03-27 00:13:21.072389 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-27 00:13:21.118829 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:21.118880 | orchestrator |
2026-03-27 00:13:21.118888 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-27 00:13:21.185888 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:21.185939 | orchestrator |
2026-03-27 00:13:21.185947 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-27 00:13:21.243759 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:21.243815 | orchestrator |
2026-03-27 00:13:21.243823 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-27 00:13:21.282923 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2026-03-27 00:13:21.282978 | orchestrator |
2026-03-27 00:13:21.282986 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-27 00:13:22.058143 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:22.058195 | orchestrator |
2026-03-27 00:13:22.058203 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-27 00:13:22.103729 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:22.103814 | orchestrator |
2026-03-27 00:13:22.103827 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-27 00:13:23.500027 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:23.500164 | orchestrator |
2026-03-27 00:13:23.500178 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-27 00:13:24.075344 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:24.075421 | orchestrator |
2026-03-27 00:13:24.075437 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-27 00:13:25.142984 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:25.143031 | orchestrator |
2026-03-27 00:13:25.143039 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-27 00:13:39.354636 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:39.354743 | orchestrator |
2026-03-27 00:13:39.354760 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-27 00:13:39.974767 | orchestrator | ok: [testbed-manager]
2026-03-27 00:13:39.974924 | orchestrator |
2026-03-27 00:13:39.974945 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-27 00:13:40.026493 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:40.026580 | orchestrator |
2026-03-27 00:13:40.026599 | orchestrator | TASK [Copy SSH public key] *****************************************************
2026-03-27 00:13:40.901125 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:40.901270 | orchestrator |
2026-03-27 00:13:40.901318 | orchestrator | TASK [Copy SSH private key] ****************************************************
2026-03-27 00:13:41.770312 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:41.770383 | orchestrator |
2026-03-27 00:13:41.770395 | orchestrator | TASK [Create configuration directory] ******************************************
2026-03-27 00:13:42.301505 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:42.301590 | orchestrator |
2026-03-27 00:13:42.301607 | orchestrator | TASK [Copy testbed repo] *******************************************************
2026-03-27 00:13:42.342308 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-03-27 00:13:42.342450 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-03-27 00:13:42.342468 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-03-27 00:13:42.342480 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-03-27 00:13:45.143204 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:45.143363 | orchestrator |
2026-03-27 00:13:45.143372 | orchestrator | TASK [Install python requirements in venv] *************************************
2026-03-27 00:13:53.736496 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2026-03-27 00:13:53.736570 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2026-03-27 00:13:53.736583 | orchestrator | ok: [testbed-manager] => (item=packaging)
2026-03-27 00:13:54.243259 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2026-03-27 00:13:54.243348 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2026-03-27 00:13:54.243364 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2026-03-27 00:13:54.243376 | orchestrator |
2026-03-27 00:13:54.243390 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2026-03-27 00:13:55.132403 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:55.132448 | orchestrator |
2026-03-27 00:13:55.132454 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2026-03-27 00:13:57.975607 | orchestrator | changed: [testbed-manager]
2026-03-27 00:13:57.975670 | orchestrator |
2026-03-27 00:13:57.975680 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2026-03-27 00:13:58.003867 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:13:58.003924 | orchestrator |
2026-03-27 00:13:58.003932 | orchestrator | TASK [Run manager part 2] ******************************************************
2026-03-27 00:15:30.162492 | orchestrator | changed: [testbed-manager]
2026-03-27 00:15:30.162600 | orchestrator |
2026-03-27 00:15:30.162620 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-27 00:15:31.306955 | orchestrator | ok: [testbed-manager]
2026-03-27 00:15:31.307129 | orchestrator |
2026-03-27 00:15:31.307211 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:15:31.307231 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
2026-03-27 00:15:31.307246 | orchestrator |
2026-03-27 00:15:31.806120 | orchestrator | ok: Runtime: 0:02:14.846468
2026-03-27 00:15:31.825431 |
2026-03-27 00:15:31.825614 | TASK [Reboot manager]
2026-03-27 00:15:33.364507 | orchestrator | ok: Runtime: 0:00:00.942316
2026-03-27 00:15:33.380154 |
2026-03-27 00:15:33.380328 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-27 00:15:47.125468 | orchestrator | ok
2026-03-27 00:15:47.136433 |
2026-03-27 00:15:47.136572 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-27 00:16:47.182371 | orchestrator | ok
2026-03-27 00:16:47.191891 |
2026-03-27 00:16:47.192022 | TASK [Deploy manager + bootstrap nodes]
2026-03-27 00:16:49.491412 | orchestrator |
2026-03-27 00:16:49.491614 | orchestrator | # DEPLOY MANAGER
2026-03-27 00:16:49.491637 | orchestrator |
2026-03-27 00:16:49.491650 | orchestrator | + set -e
2026-03-27 00:16:49.491660 | orchestrator | + echo
2026-03-27 00:16:49.491672 | orchestrator | + echo '# DEPLOY MANAGER'
2026-03-27 00:16:49.491686 | orchestrator | + echo
2026-03-27 00:16:49.491727 | orchestrator | + cat /opt/manager-vars.sh
2026-03-27 00:16:49.494044 | orchestrator | export NUMBER_OF_NODES=6
2026-03-27 00:16:49.494066 | orchestrator |
2026-03-27 00:16:49.494077 | orchestrator | export CEPH_VERSION=reef
2026-03-27 00:16:49.494088 | orchestrator | export CONFIGURATION_VERSION=main
2026-03-27 00:16:49.494098 | orchestrator | export MANAGER_VERSION=latest
2026-03-27 00:16:49.494117 | orchestrator | export OPENSTACK_VERSION=2024.2
2026-03-27 00:16:49.494148 | orchestrator |
2026-03-27 00:16:49.494165 | orchestrator | export ARA=false
2026-03-27 00:16:49.494174 | orchestrator | export DEPLOY_MODE=manager
2026-03-27 00:16:49.494188 | orchestrator | export TEMPEST=true
2026-03-27 00:16:49.494198 | orchestrator | export IS_ZUUL=true
2026-03-27 00:16:49.494207 | orchestrator |
2026-03-27 00:16:49.494221 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154
2026-03-27 00:16:49.494231 | orchestrator | export EXTERNAL_API=false
2026-03-27 00:16:49.494239 | orchestrator |
2026-03-27 00:16:49.494248 | orchestrator | export IMAGE_USER=ubuntu
2026-03-27 00:16:49.494261 | orchestrator | export IMAGE_NODE_USER=ubuntu
2026-03-27 00:16:49.494269 | orchestrator |
2026-03-27 00:16:49.494278 | orchestrator | export CEPH_STACK=ceph-ansible
2026-03-27 00:16:49.494291 | orchestrator |
2026-03-27 00:16:49.494300 | orchestrator | + echo
2026-03-27 00:16:49.494310 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-27 00:16:49.494981 | orchestrator | ++ export INTERACTIVE=false
2026-03-27 00:16:49.494996 | orchestrator | ++ INTERACTIVE=false
2026-03-27 00:16:49.495007 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-27 00:16:49.495018 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-27 00:16:49.495323 | orchestrator | + source /opt/manager-vars.sh
2026-03-27 00:16:49.495349 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-27 00:16:49.495367 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-27 00:16:49.495384 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-27 00:16:49.495400 | orchestrator | ++ CEPH_VERSION=reef
2026-03-27 00:16:49.495415 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-27 00:16:49.495431 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-27 00:16:49.495448 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-27 00:16:49.495464 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-27 00:16:49.495485 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-27 00:16:49.495512 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-27 00:16:49.495529 | orchestrator | ++ export ARA=false
2026-03-27 00:16:49.495544 | orchestrator | ++ ARA=false
2026-03-27 00:16:49.495561 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-27 00:16:49.495576 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-27 00:16:49.495593 | orchestrator | ++ export TEMPEST=true
2026-03-27 00:16:49.495609 | orchestrator | ++ TEMPEST=true
2026-03-27 00:16:49.495626 | orchestrator | ++ export IS_ZUUL=true
2026-03-27 00:16:49.495642 | orchestrator | ++ IS_ZUUL=true
2026-03-27 00:16:49.495657 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154
2026-03-27 00:16:49.495675 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154
2026-03-27 00:16:49.495695 | orchestrator | ++ export EXTERNAL_API=false
2026-03-27 00:16:49.495711 | orchestrator | ++ EXTERNAL_API=false
2026-03-27 00:16:49.495726 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-27 00:16:49.495743 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-27 00:16:49.495759 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-27 00:16:49.495775 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-27 00:16:49.495792 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-27 00:16:49.495807 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-27 00:16:49.495823 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2026-03-27 00:16:49.552239 | orchestrator | + docker version
2026-03-27 00:16:49.664234 | orchestrator | Client: Docker Engine - Community
2026-03-27 00:16:49.664327 | orchestrator | Version: 27.5.1
2026-03-27 00:16:49.664342 | orchestrator | API version: 1.47
2026-03-27 00:16:49.664356 | orchestrator | Go version: go1.22.11
2026-03-27 00:16:49.664367 | orchestrator | Git commit: 9f9e405
2026-03-27 00:16:49.664378 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2026-03-27 00:16:49.664390 | orchestrator | OS/Arch: linux/amd64
2026-03-27 00:16:49.664401 | orchestrator | Context: default
2026-03-27 00:16:49.664412 | orchestrator |
2026-03-27 00:16:49.664423 | orchestrator | Server: Docker Engine - Community
2026-03-27 00:16:49.664435 | orchestrator | Engine:
2026-03-27 00:16:49.664445 | orchestrator | Version: 27.5.1
2026-03-27 00:16:49.664457 | orchestrator | API version: 1.47 (minimum version 1.24)
2026-03-27 00:16:49.664494 | orchestrator | Go version: go1.22.11
2026-03-27 00:16:49.664506 | orchestrator | Git commit: 4c9b3b0
2026-03-27 00:16:49.664517 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2026-03-27 00:16:49.664528 | orchestrator | OS/Arch: linux/amd64
2026-03-27 00:16:49.664538 | orchestrator | Experimental: false
2026-03-27 00:16:49.664549 | orchestrator | containerd:
2026-03-27 00:16:49.664559 | orchestrator | Version: v2.2.2
2026-03-27 00:16:49.664571 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9
2026-03-27 00:16:49.664582 | orchestrator | runc:
2026-03-27 00:16:49.664592 | orchestrator | Version: 1.3.4
2026-03-27 00:16:49.664603 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8
2026-03-27 00:16:49.664614 | orchestrator | docker-init:
2026-03-27 00:16:49.664625 | orchestrator | Version: 0.19.0
2026-03-27 00:16:49.664637 | orchestrator | GitCommit: de40ad0
2026-03-27 00:16:49.666609 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2026-03-27 00:16:49.676449 | orchestrator | + set -e
2026-03-27 00:16:49.676516 | orchestrator | + source /opt/manager-vars.sh
2026-03-27 00:16:49.676539 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-27 00:16:49.676559 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-27 00:16:49.676581 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-27 00:16:49.676603 | orchestrator | ++ CEPH_VERSION=reef
2026-03-27 00:16:49.676625 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-27 00:16:49.676646 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-27 00:16:49.676666 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-27 00:16:49.676696 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-27 00:16:49.676715 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-27 00:16:49.676727 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-27 00:16:49.676737 | orchestrator | ++ export ARA=false
2026-03-27 00:16:49.676749 | orchestrator | ++ ARA=false
2026-03-27 00:16:49.676766 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-27 00:16:49.676784 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-27 00:16:49.676802 | orchestrator | ++ export TEMPEST=true
2026-03-27 00:16:49.676820 | orchestrator | ++ TEMPEST=true
2026-03-27 00:16:49.676839 | orchestrator | ++ export IS_ZUUL=true
2026-03-27 00:16:49.676856 | orchestrator | ++ IS_ZUUL=true
2026-03-27 00:16:49.676893 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154
2026-03-27 00:16:49.676913 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154
2026-03-27 00:16:49.676932 | orchestrator | ++ export EXTERNAL_API=false
2026-03-27 00:16:49.676952 | orchestrator | ++ EXTERNAL_API=false
2026-03-27 00:16:49.676971 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-27 00:16:49.676982 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-27 00:16:49.676993 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-27 00:16:49.677004 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-27 00:16:49.677015 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-27 00:16:49.677026 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-27 00:16:49.677036 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-27 00:16:49.677047 | orchestrator | ++ export INTERACTIVE=false
2026-03-27 00:16:49.677058 | orchestrator | ++ INTERACTIVE=false
2026-03-27 00:16:49.677068 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-27 00:16:49.677084 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-27 00:16:49.677100 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-27 00:16:49.677111 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-27 00:16:49.677122 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-03-27 00:16:49.684713 | orchestrator | + set -e
2026-03-27 00:16:49.684795 | orchestrator | + VERSION=reef
2026-03-27 00:16:49.685558 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-27 00:16:49.691574 | orchestrator | + [[ -n ceph_version: reef ]]
2026-03-27 00:16:49.691624 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-03-27 00:16:49.697582 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2026-03-27 00:16:49.704386 | orchestrator | + set -e
2026-03-27 00:16:49.704440 | orchestrator | + VERSION=2024.2
2026-03-27 00:16:49.705277 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-27 00:16:49.709092 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-03-27 00:16:49.709192 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2026-03-27 00:16:49.714499 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-27 00:16:49.715449 | orchestrator | ++ semver latest 7.0.0
2026-03-27 00:16:49.772427 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-27 00:16:49.772541 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-27 00:16:49.772560 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-27 00:16:49.773413 | orchestrator | ++ semver latest 10.0.0-0
2026-03-27 00:16:49.836157 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-27 00:16:49.836960 | orchestrator | ++ semver 2024.2 2025.1
2026-03-27 00:16:49.897097 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-27 00:16:49.897248 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-27 00:16:49.974931 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-27 00:16:49.975560 | orchestrator | + source /opt/venv/bin/activate
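The `set-ceph-version.sh` and `set-openstack-version.sh` traces above follow the same grep-then-sed pattern: check that the key already exists, then rewrite its value in place. A minimal sketch of that pattern; the temporary file stands in for the real `/opt/configuration/environments/manager/configuration.yml` and the sample contents are invented for illustration:

```shell
# Sketch of the grep-then-sed version bump seen in the trace above.
set -e
CONF=$(mktemp)
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$CONF"

VERSION=reef
# Only rewrite the key when it is already present, as the script does.
if [ -n "$(grep '^ceph_version:' "$CONF")" ]; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONF"
fi
grep '^ceph_version:' "$CONF"   # prints: ceph_version: reef
```

Guarding the `sed` with the `grep` check means a missing key is left alone instead of silently going unset, and other keys in the file are untouched.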
2026-03-27 00:16:49.976534 | orchestrator | ++ deactivate nondestructive
2026-03-27 00:16:49.976558 | orchestrator | ++ '[' -n '' ']'
2026-03-27 00:16:49.976571 | orchestrator | ++ '[' -n '' ']'
2026-03-27 00:16:49.976582 | orchestrator | ++ hash -r
2026-03-27 00:16:49.976593 | orchestrator | ++ '[' -n '' ']'
2026-03-27 00:16:49.976604 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-27 00:16:49.976620 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-27 00:16:49.976634 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-27 00:16:49.976645 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-27 00:16:49.976656 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-27 00:16:49.976667 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-27 00:16:49.976678 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-27 00:16:49.976694 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-27 00:16:49.976742 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-27 00:16:49.976755 | orchestrator | ++ export PATH
2026-03-27 00:16:49.976766 | orchestrator | ++ '[' -n '' ']'
2026-03-27 00:16:49.976781 | orchestrator | ++ '[' -z '' ']'
2026-03-27 00:16:49.976888 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-27 00:16:49.976904 | orchestrator | ++ PS1='(venv) '
2026-03-27 00:16:49.976915 | orchestrator | ++ export PS1
2026-03-27 00:16:49.976925 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-27 00:16:49.976937 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-27 00:16:49.976949 | orchestrator | ++ hash -r
2026-03-27 00:16:49.976988 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-27 00:16:51.035863 | orchestrator |
2026-03-27 00:16:51.035996 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-27 00:16:51.036020 | orchestrator |
2026-03-27 00:16:51.036040 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-27 00:16:51.541811 | orchestrator | ok: [testbed-manager]
2026-03-27 00:16:51.541916 | orchestrator |
2026-03-27 00:16:51.541932 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-27 00:16:52.475503 | orchestrator | changed: [testbed-manager]
2026-03-27 00:16:52.475611 | orchestrator |
2026-03-27 00:16:52.475628 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-27 00:16:52.475641 | orchestrator |
2026-03-27 00:16:52.475652 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-27 00:16:54.764317 | orchestrator | ok: [testbed-manager]
2026-03-27 00:16:54.764403 | orchestrator |
2026-03-27 00:16:54.764412 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-27 00:16:54.809250 | orchestrator | ok: [testbed-manager]
2026-03-27 00:16:54.809354 | orchestrator |
2026-03-27 00:16:54.809372 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-27 00:16:55.238279 | orchestrator | changed: [testbed-manager]
2026-03-27 00:16:55.238399 | orchestrator |
2026-03-27 00:16:55.238429 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-27 00:16:55.275262 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:16:55.275343 | orchestrator |
2026-03-27 00:16:55.275354 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-27 00:16:55.579293 | orchestrator | changed: [testbed-manager]
2026-03-27 00:16:55.579393 | orchestrator |
2026-03-27 00:16:55.579409 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-27 00:16:55.890439 | orchestrator | ok: [testbed-manager]
2026-03-27 00:16:55.890568 | orchestrator |
2026-03-27 00:16:55.890590 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-27 00:16:56.000374 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:16:56.000438 | orchestrator |
2026-03-27 00:16:56.000469 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-27 00:16:56.000475 | orchestrator |
2026-03-27 00:16:56.000479 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-27 00:16:57.644070 | orchestrator | ok: [testbed-manager]
2026-03-27 00:16:57.644191 | orchestrator |
2026-03-27 00:16:57.644208 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-27 00:16:57.723081 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-27 00:16:57.723208 | orchestrator |
2026-03-27 00:16:57.723223 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-27 00:16:57.767850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-27 00:16:57.767942 | orchestrator |
2026-03-27 00:16:57.767957 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-27 00:16:58.839451 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-27 00:16:58.839559 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-03-27 00:16:58.839576 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-27 00:16:58.839591 | orchestrator |
2026-03-27 00:16:58.839607 | orchestrator |
TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-27 00:17:00.537440 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-27 00:17:00.537542 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-27 00:17:00.537555 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-27 00:17:00.537566 | orchestrator | 2026-03-27 00:17:00.537577 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-27 00:17:01.121553 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-27 00:17:01.121657 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:01.121675 | orchestrator | 2026-03-27 00:17:01.121688 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-27 00:17:01.714735 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-27 00:17:01.714835 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:01.714852 | orchestrator | 2026-03-27 00:17:01.714865 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-27 00:17:01.768643 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:17:01.768725 | orchestrator | 2026-03-27 00:17:01.768740 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-27 00:17:02.103950 | orchestrator | ok: [testbed-manager] 2026-03-27 00:17:02.104055 | orchestrator | 2026-03-27 00:17:02.104072 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-27 00:17:02.187754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-27 00:17:02.187870 | orchestrator | 2026-03-27 00:17:02.187896 | orchestrator | TASK [osism.services.traefik : Create traefik external network] 
**************** 2026-03-27 00:17:03.223529 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:03.223630 | orchestrator | 2026-03-27 00:17:03.223662 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-27 00:17:04.009793 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:04.009881 | orchestrator | 2026-03-27 00:17:04.009900 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-27 00:17:13.411351 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:13.411462 | orchestrator | 2026-03-27 00:17:13.411501 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-27 00:17:13.467188 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:17:13.467282 | orchestrator | 2026-03-27 00:17:13.467297 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-27 00:17:13.467310 | orchestrator | 2026-03-27 00:17:13.467321 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-27 00:17:15.232609 | orchestrator | ok: [testbed-manager] 2026-03-27 00:17:15.232710 | orchestrator | 2026-03-27 00:17:15.232759 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-27 00:17:15.328509 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-27 00:17:15.328596 | orchestrator | 2026-03-27 00:17:15.328608 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-27 00:17:15.396817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-27 00:17:15.396912 | orchestrator | 2026-03-27 00:17:15.396935 | orchestrator | TASK [osism.services.manager : Install required packages] 
********************** 2026-03-27 00:17:17.707705 | orchestrator | ok: [testbed-manager] 2026-03-27 00:17:17.707807 | orchestrator | 2026-03-27 00:17:17.707824 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-27 00:17:17.763410 | orchestrator | ok: [testbed-manager] 2026-03-27 00:17:17.763506 | orchestrator | 2026-03-27 00:17:17.763522 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-27 00:17:17.891381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-27 00:17:17.891481 | orchestrator | 2026-03-27 00:17:17.891500 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-27 00:17:20.512406 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-27 00:17:20.512510 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-27 00:17:20.512525 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-27 00:17:20.512537 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-27 00:17:20.512548 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-27 00:17:20.512559 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-27 00:17:20.512570 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-27 00:17:20.512581 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-27 00:17:20.512592 | orchestrator | 2026-03-27 00:17:20.512605 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-27 00:17:21.115412 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:21.115511 | orchestrator | 2026-03-27 00:17:21.115527 | orchestrator | TASK [osism.services.manager : Copy client environment 
file] ******************* 2026-03-27 00:17:21.708327 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:21.708426 | orchestrator | 2026-03-27 00:17:21.708444 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-27 00:17:21.769077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-27 00:17:21.769201 | orchestrator | 2026-03-27 00:17:21.769218 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-27 00:17:22.922297 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-27 00:17:22.922393 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-27 00:17:22.922410 | orchestrator | 2026-03-27 00:17:22.922422 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-27 00:17:23.529950 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:23.530140 | orchestrator | 2026-03-27 00:17:23.530165 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-27 00:17:23.585053 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:17:23.585192 | orchestrator | 2026-03-27 00:17:23.585211 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-27 00:17:23.655067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-27 00:17:23.655172 | orchestrator | 2026-03-27 00:17:23.655188 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-27 00:17:24.284334 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:24.284438 | orchestrator | 2026-03-27 00:17:24.284455 | orchestrator | TASK [osism.services.manager : Include ansible config 
tasks] ******************* 2026-03-27 00:17:24.344930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-27 00:17:24.345031 | orchestrator | 2026-03-27 00:17:24.345042 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-27 00:17:25.684390 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-27 00:17:25.684482 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-27 00:17:25.684497 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:25.684513 | orchestrator | 2026-03-27 00:17:25.684533 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-27 00:17:26.277636 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:26.277736 | orchestrator | 2026-03-27 00:17:26.277753 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-27 00:17:26.326358 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:17:26.326455 | orchestrator | 2026-03-27 00:17:26.326470 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-27 00:17:26.417009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-27 00:17:26.417096 | orchestrator | 2026-03-27 00:17:26.417108 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-27 00:17:26.893045 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:26.893207 | orchestrator | 2026-03-27 00:17:26.893249 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-27 00:17:28.276766 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:28.276873 | orchestrator | 2026-03-27 00:17:28.276889 | 
orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-27 00:17:29.477385 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-27 00:17:29.477486 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-27 00:17:29.477501 | orchestrator | 2026-03-27 00:17:29.477514 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-27 00:17:30.070309 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:30.070393 | orchestrator | 2026-03-27 00:17:30.070404 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-27 00:17:30.426255 | orchestrator | ok: [testbed-manager] 2026-03-27 00:17:30.426330 | orchestrator | 2026-03-27 00:17:30.426336 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-27 00:17:30.785962 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:30.786199 | orchestrator | 2026-03-27 00:17:30.786227 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-27 00:17:30.834368 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:17:30.834459 | orchestrator | 2026-03-27 00:17:30.834475 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-27 00:17:30.904069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-27 00:17:30.904200 | orchestrator | 2026-03-27 00:17:30.904219 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-27 00:17:30.943898 | orchestrator | ok: [testbed-manager] 2026-03-27 00:17:30.943966 | orchestrator | 2026-03-27 00:17:30.943973 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-27 
00:17:32.900447 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-27 00:17:32.900550 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-27 00:17:32.900565 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-27 00:17:32.900578 | orchestrator | 2026-03-27 00:17:32.900590 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-27 00:17:33.591633 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:33.591740 | orchestrator | 2026-03-27 00:17:33.591767 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-27 00:17:34.299784 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:34.299876 | orchestrator | 2026-03-27 00:17:34.299891 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-27 00:17:35.026543 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:35.026592 | orchestrator | 2026-03-27 00:17:35.026609 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-27 00:17:35.093181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-27 00:17:35.093235 | orchestrator | 2026-03-27 00:17:35.093248 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-27 00:17:35.138549 | orchestrator | ok: [testbed-manager] 2026-03-27 00:17:35.138628 | orchestrator | 2026-03-27 00:17:35.138638 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-27 00:17:35.833000 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-27 00:17:35.833133 | orchestrator | 2026-03-27 00:17:35.833159 | orchestrator | TASK [osism.services.manager : Include service tasks] 
************************** 2026-03-27 00:17:35.917611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-27 00:17:35.917708 | orchestrator | 2026-03-27 00:17:35.917724 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-27 00:17:36.612813 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:36.612918 | orchestrator | 2026-03-27 00:17:36.612942 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-27 00:17:37.223660 | orchestrator | ok: [testbed-manager] 2026-03-27 00:17:37.223760 | orchestrator | 2026-03-27 00:17:37.223785 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-27 00:17:37.281972 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:17:37.282143 | orchestrator | 2026-03-27 00:17:37.282163 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-27 00:17:37.340804 | orchestrator | ok: [testbed-manager] 2026-03-27 00:17:37.340885 | orchestrator | 2026-03-27 00:17:37.340896 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-27 00:17:38.149431 | orchestrator | changed: [testbed-manager] 2026-03-27 00:17:38.149524 | orchestrator | 2026-03-27 00:17:38.149545 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-27 00:18:42.049826 | orchestrator | changed: [testbed-manager] 2026-03-27 00:18:42.049939 | orchestrator | 2026-03-27 00:18:42.049958 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-27 00:18:44.008935 | orchestrator | ok: [testbed-manager] 2026-03-27 00:18:44.009054 | orchestrator | 2026-03-27 00:18:44.009121 | orchestrator | TASK [osism.services.manager : Do a 
manual start of the manager service] ******* 2026-03-27 00:18:44.067713 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:18:44.067800 | orchestrator | 2026-03-27 00:18:44.067814 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-27 00:18:46.430568 | orchestrator | changed: [testbed-manager] 2026-03-27 00:18:46.430664 | orchestrator | 2026-03-27 00:18:46.430680 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-27 00:18:46.521492 | orchestrator | ok: [testbed-manager] 2026-03-27 00:18:46.521578 | orchestrator | 2026-03-27 00:18:46.521612 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-27 00:18:46.521623 | orchestrator | 2026-03-27 00:18:46.521633 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-27 00:18:46.572810 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:18:46.572887 | orchestrator | 2026-03-27 00:18:46.572897 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-27 00:19:46.623750 | orchestrator | Pausing for 60 seconds 2026-03-27 00:19:46.623861 | orchestrator | changed: [testbed-manager] 2026-03-27 00:19:46.623876 | orchestrator | 2026-03-27 00:19:46.623890 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-27 00:19:49.534134 | orchestrator | changed: [testbed-manager] 2026-03-27 00:19:49.534241 | orchestrator | 2026-03-27 00:19:49.534258 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-27 00:20:30.934631 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-27 00:20:30.934751 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
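The "Wait for an healthy manager service" handler above retries up to 50 times until Docker reports the container healthy, and the `wait_for_container_healthy` helper traced later in this log does the equivalent in bash via `docker inspect`. A hedged sketch of that polling loop, with the docker binary parameterised (an assumption; the real helper hardcodes `/usr/bin/docker`) so the logic can be exercised with a stub instead of a running daemon:

```shell
# DOCKER is overridable purely so the loop below is testable without a
# Docker daemon; the traced script calls /usr/bin/docker directly.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1" name="$2"
    local attempt_num=1 status
    while (( attempt_num <= max_attempts )); do
        # .State.Health.Status is "starting", "healthy" or "unhealthy".
        status="$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")"
        if [[ "$status" == healthy ]]; then
            echo "container ${name} is healthy"
            return 0
        fi
        echo "FAILED - RETRYING: ${name} ($((max_attempts - attempt_num)) retries left)" >&2
        attempt_num=$((attempt_num + 1))
        sleep 1
    done
    echo "container ${name} never became healthy" >&2
    return 1
}

# Stub docker that always reports healthy, standing in for a real daemon:
stub_dir="$(mktemp -d)"
printf '#!/bin/sh\necho healthy\n' > "${stub_dir}/docker"
chmod +x "${stub_dir}/docker"
DOCKER="${stub_dir}/docker" wait_for_container_healthy 60 ceph-ansible
# prints: container ceph-ansible is healthy
```

The two "FAILED - RETRYING" lines in the handler output above are this same pattern at the Ansible level (`until`/`retries`): the container's healthcheck had not yet flipped from `starting` to `healthy` on the first attempts.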
2026-03-27 00:20:30.934767 | orchestrator | changed: [testbed-manager] 2026-03-27 00:20:30.934810 | orchestrator | 2026-03-27 00:20:30.934823 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-27 00:20:35.907575 | orchestrator | changed: [testbed-manager] 2026-03-27 00:20:35.907676 | orchestrator | 2026-03-27 00:20:35.907691 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-27 00:20:35.979612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-27 00:20:35.979708 | orchestrator | 2026-03-27 00:20:35.979722 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-27 00:20:35.979733 | orchestrator | 2026-03-27 00:20:35.979743 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-27 00:20:36.033259 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:20:36.033333 | orchestrator | 2026-03-27 00:20:36.033346 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-27 00:20:36.092699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-27 00:20:36.092786 | orchestrator | 2026-03-27 00:20:36.092801 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-27 00:20:36.780508 | orchestrator | changed: [testbed-manager] 2026-03-27 00:20:36.780611 | orchestrator | 2026-03-27 00:20:36.780629 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-27 00:20:39.887978 | orchestrator | ok: [testbed-manager] 2026-03-27 00:20:39.888144 | orchestrator | 2026-03-27 00:20:39.888163 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-27 00:20:39.963145 | orchestrator | ok: [testbed-manager] => { 2026-03-27 00:20:39.963258 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-27 00:20:39.963281 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-27 00:20:39.963300 | orchestrator | "Checking running containers against expected versions...", 2026-03-27 00:20:39.963319 | orchestrator | "", 2026-03-27 00:20:39.963340 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-27 00:20:39.963359 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-27 00:20:39.963376 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.963394 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-27 00:20:39.963410 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.963425 | orchestrator | "", 2026-03-27 00:20:39.963441 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-27 00:20:39.963457 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-27 00:20:39.963473 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.963489 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-27 00:20:39.963505 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.963522 | orchestrator | "", 2026-03-27 00:20:39.963538 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-27 00:20:39.963555 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-27 00:20:39.963570 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.963586 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-27 00:20:39.963601 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.963619 | orchestrator | "", 2026-03-27 00:20:39.963635 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-27 00:20:39.963651 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-27 00:20:39.963669 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.963686 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-27 00:20:39.963704 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.963721 | orchestrator | "", 2026-03-27 00:20:39.963738 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-27 00:20:39.963754 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-27 00:20:39.963804 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.963822 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-27 00:20:39.963837 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.963853 | orchestrator | "", 2026-03-27 00:20:39.963868 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-27 00:20:39.963885 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.963901 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.963917 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.963934 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.963951 | orchestrator | "", 2026-03-27 00:20:39.963968 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-27 00:20:39.963984 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-27 00:20:39.964001 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.964068 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-27 00:20:39.964085 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.964101 | orchestrator | "", 2026-03-27 00:20:39.964116 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-27 00:20:39.964132 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-27 00:20:39.964147 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.964163 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-27 00:20:39.964180 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.964195 | orchestrator | "", 2026-03-27 00:20:39.964224 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-27 00:20:39.964275 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-27 00:20:39.964299 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.964316 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-27 00:20:39.964333 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.964350 | orchestrator | "", 2026-03-27 00:20:39.964368 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-27 00:20:39.964384 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-27 00:20:39.964401 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.964418 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-27 00:20:39.964435 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.964453 | orchestrator | "", 2026-03-27 00:20:39.964470 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-27 00:20:39.964486 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.964504 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.964521 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.964537 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.964554 | orchestrator | "", 2026-03-27 00:20:39.964572 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-27 00:20:39.964589 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.964605 | 
orchestrator | " Enabled: true", 2026-03-27 00:20:39.964621 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.964634 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.964644 | orchestrator | "", 2026-03-27 00:20:39.964653 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-27 00:20:39.964663 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.964672 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.964682 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.964691 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.964701 | orchestrator | "", 2026-03-27 00:20:39.964710 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-27 00:20:39.964720 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.964729 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.964739 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.964763 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.964773 | orchestrator | "", 2026-03-27 00:20:39.964782 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-27 00:20:39.964817 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.964827 | orchestrator | " Enabled: true", 2026-03-27 00:20:39.964837 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-27 00:20:39.964846 | orchestrator | " Status: ✅ MATCH", 2026-03-27 00:20:39.964855 | orchestrator | "", 2026-03-27 00:20:39.964867 | orchestrator | "=== Summary ===", 2026-03-27 00:20:39.964884 | orchestrator | "Errors (version mismatches): 0", 2026-03-27 00:20:39.964900 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-27 00:20:39.964917 | orchestrator | "", 2026-03-27 00:20:39.964933 | orchestrator | "✅ All running containers match expected 
versions!" 2026-03-27 00:20:39.964950 | orchestrator | ] 2026-03-27 00:20:39.964966 | orchestrator | } 2026-03-27 00:20:39.964982 | orchestrator | 2026-03-27 00:20:39.965000 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-27 00:20:40.023316 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:20:40.023404 | orchestrator | 2026-03-27 00:20:40.023416 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:20:40.023431 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-27 00:20:40.023445 | orchestrator | 2026-03-27 00:20:40.116472 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-27 00:20:40.116561 | orchestrator | + deactivate 2026-03-27 00:20:40.116573 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-27 00:20:40.116586 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-27 00:20:40.116595 | orchestrator | + export PATH 2026-03-27 00:20:40.116604 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-27 00:20:40.116614 | orchestrator | + '[' -n '' ']' 2026-03-27 00:20:40.116623 | orchestrator | + hash -r 2026-03-27 00:20:40.116631 | orchestrator | + '[' -n '' ']' 2026-03-27 00:20:40.116640 | orchestrator | + unset VIRTUAL_ENV 2026-03-27 00:20:40.116648 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-27 00:20:40.116657 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-27 00:20:40.116665 | orchestrator | + unset -f deactivate 2026-03-27 00:20:40.116674 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-27 00:20:40.125961 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-27 00:20:40.126125 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-27 00:20:40.126144 | orchestrator | + local max_attempts=60 2026-03-27 00:20:40.126158 | orchestrator | + local name=ceph-ansible 2026-03-27 00:20:40.126170 | orchestrator | + local attempt_num=1 2026-03-27 00:20:40.126975 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:20:40.163805 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:20:40.163887 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-27 00:20:40.163902 | orchestrator | + local max_attempts=60 2026-03-27 00:20:40.163913 | orchestrator | + local name=kolla-ansible 2026-03-27 00:20:40.163925 | orchestrator | + local attempt_num=1 2026-03-27 00:20:40.164620 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-27 00:20:40.200342 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:20:40.200427 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-27 00:20:40.200441 | orchestrator | + local max_attempts=60 2026-03-27 00:20:40.200454 | orchestrator | + local name=osism-ansible 2026-03-27 00:20:40.200465 | orchestrator | + local attempt_num=1 2026-03-27 00:20:40.200476 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-27 00:20:40.234387 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:20:40.234466 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-27 00:20:40.234480 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-27 00:20:40.890488 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-27 00:20:41.058420 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-27 00:20:41.058563 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-03-27 00:20:41.058581 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-03-27 00:20:41.058629 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-27 00:20:41.058643 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-03-27 00:20:41.058653 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-03-27 00:20:41.058663 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-03-27 00:20:41.058673 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2026-03-27 00:20:41.058699 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-03-27 00:20:41.058709 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-03-27 00:20:41.058719 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-03-27 00:20:41.058728 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-03-27 00:20:41.058737 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-03-27 00:20:41.058747 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-27 00:20:41.058757 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-03-27 00:20:41.058766 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-03-27 00:20:41.061928 | orchestrator | ++ semver latest 7.0.0 2026-03-27 00:20:41.101711 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-27 00:20:41.101806 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-27 00:20:41.101823 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-27 00:20:41.104855 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-27 00:20:53.304765 | orchestrator | 2026-03-27 00:20:53 | INFO  | Prepare task for execution of resolvconf. 2026-03-27 00:20:53.483772 | orchestrator | 2026-03-27 00:20:53 | INFO  | Task 6f29831e-8a48-4d79-a7e6-702c9d61cfe2 (resolvconf) was prepared for execution. 2026-03-27 00:20:53.483923 | orchestrator | 2026-03-27 00:20:53 | INFO  | It takes a moment until task 6f29831e-8a48-4d79-a7e6-702c9d61cfe2 (resolvconf) has been started and output is visible here. 
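The `bash -x` trace above shows the deploy script's `wait_for_container_healthy` helper polling `docker inspect -f '{{.State.Health.Status}}'` for each manager container. A minimal sketch of that polling pattern, reconstructed from the trace — the `wait_until`/`container_healthy` split and the 5-second sleep are assumptions made here for readability, not the actual testbed script:

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the wait_for_container_healthy pattern
# seen in the trace above; the real helper in the testbed scripts may differ.

# Retry a command until it succeeds or the attempt limit is reached.
wait_until() {
    local max_attempts="$1"; shift
    local attempt_num=1
    until "$@"; do
        if (( attempt_num >= max_attempts )); then
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5  # assumed back-off interval; not visible in the trace
    done
}

# Health check matching the trace:
#   docker inspect -f '{{.State.Health.Status}}' NAME   prints "healthy"
container_healthy() {
    [[ "$(docker inspect -f '{{.State.Health.Status}}' "$1" 2>/dev/null)" == "healthy" ]]
}

# As in the log: block until each manager container reports healthy.
# wait_until 60 container_healthy ceph-ansible
# wait_until 60 container_healthy kolla-ansible
# wait_until 60 container_healthy osism-ansible
```

In the log all three containers are already healthy on the first probe, so the loop body never runs and the script proceeds straight to `docker compose ps`.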
2026-03-27 00:21:05.768428 | orchestrator | 2026-03-27 00:21:05.768553 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-27 00:21:05.768571 | orchestrator | 2026-03-27 00:21:05.768584 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-27 00:21:05.768596 | orchestrator | Friday 27 March 2026 00:20:56 +0000 (0:00:00.179) 0:00:00.179 ********** 2026-03-27 00:21:05.768607 | orchestrator | ok: [testbed-manager] 2026-03-27 00:21:05.768619 | orchestrator | 2026-03-27 00:21:05.768630 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-27 00:21:05.768642 | orchestrator | Friday 27 March 2026 00:20:59 +0000 (0:00:03.390) 0:00:03.569 ********** 2026-03-27 00:21:05.768653 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:21:05.768664 | orchestrator | 2026-03-27 00:21:05.768675 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-27 00:21:05.768686 | orchestrator | Friday 27 March 2026 00:20:59 +0000 (0:00:00.058) 0:00:03.628 ********** 2026-03-27 00:21:05.768697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-27 00:21:05.768709 | orchestrator | 2026-03-27 00:21:05.768720 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-27 00:21:05.768731 | orchestrator | Friday 27 March 2026 00:21:00 +0000 (0:00:00.067) 0:00:03.695 ********** 2026-03-27 00:21:05.768753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-27 00:21:05.768765 | orchestrator | 2026-03-27 00:21:05.768775 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-27 00:21:05.768786 | orchestrator | Friday 27 March 2026 00:21:00 +0000 (0:00:00.071) 0:00:03.766 ********** 2026-03-27 00:21:05.768796 | orchestrator | ok: [testbed-manager] 2026-03-27 00:21:05.768807 | orchestrator | 2026-03-27 00:21:05.768818 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-27 00:21:05.768829 | orchestrator | Friday 27 March 2026 00:21:01 +0000 (0:00:01.093) 0:00:04.861 ********** 2026-03-27 00:21:05.768840 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:21:05.768850 | orchestrator | 2026-03-27 00:21:05.768861 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-27 00:21:05.768872 | orchestrator | Friday 27 March 2026 00:21:01 +0000 (0:00:00.052) 0:00:04.913 ********** 2026-03-27 00:21:05.768882 | orchestrator | ok: [testbed-manager] 2026-03-27 00:21:05.768893 | orchestrator | 2026-03-27 00:21:05.768904 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-27 00:21:05.768914 | orchestrator | Friday 27 March 2026 00:21:01 +0000 (0:00:00.512) 0:00:05.425 ********** 2026-03-27 00:21:05.768925 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:21:05.768936 | orchestrator | 2026-03-27 00:21:05.768946 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-27 00:21:05.768958 | orchestrator | Friday 27 March 2026 00:21:01 +0000 (0:00:00.078) 0:00:05.504 ********** 2026-03-27 00:21:05.768968 | orchestrator | changed: [testbed-manager] 2026-03-27 00:21:05.768979 | orchestrator | 2026-03-27 00:21:05.768990 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-27 00:21:05.769029 | orchestrator | Friday 27 March 2026 00:21:02 +0000 (0:00:00.564) 0:00:06.068 ********** 2026-03-27 00:21:05.769043 | orchestrator | changed: 
[testbed-manager] 2026-03-27 00:21:05.769054 | orchestrator | 2026-03-27 00:21:05.769065 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-27 00:21:05.769075 | orchestrator | Friday 27 March 2026 00:21:03 +0000 (0:00:01.050) 0:00:07.119 ********** 2026-03-27 00:21:05.769086 | orchestrator | ok: [testbed-manager] 2026-03-27 00:21:05.769097 | orchestrator | 2026-03-27 00:21:05.769130 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-27 00:21:05.769142 | orchestrator | Friday 27 March 2026 00:21:04 +0000 (0:00:00.945) 0:00:08.065 ********** 2026-03-27 00:21:05.769153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-27 00:21:05.769163 | orchestrator | 2026-03-27 00:21:05.769174 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-27 00:21:05.769184 | orchestrator | Friday 27 March 2026 00:21:04 +0000 (0:00:00.058) 0:00:08.123 ********** 2026-03-27 00:21:05.769195 | orchestrator | changed: [testbed-manager] 2026-03-27 00:21:05.769205 | orchestrator | 2026-03-27 00:21:05.769216 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:21:05.769228 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-27 00:21:05.769239 | orchestrator | 2026-03-27 00:21:05.769249 | orchestrator | 2026-03-27 00:21:05.769260 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:21:05.769271 | orchestrator | Friday 27 March 2026 00:21:05 +0000 (0:00:01.121) 0:00:09.245 ********** 2026-03-27 00:21:05.769282 | orchestrator | =============================================================================== 2026-03-27 00:21:05.769292 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.39s 2026-03-27 00:21:05.769303 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s 2026-03-27 00:21:05.769313 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.09s 2026-03-27 00:21:05.769324 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s 2026-03-27 00:21:05.769335 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2026-03-27 00:21:05.769345 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-03-27 00:21:05.769374 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2026-03-27 00:21:05.769385 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-03-27 00:21:05.769396 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-03-27 00:21:05.769406 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-03-27 00:21:05.769417 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.06s 2026-03-27 00:21:05.769427 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-03-27 00:21:05.769438 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-03-27 00:21:05.941464 | orchestrator | + osism apply sshconfig 2026-03-27 00:21:17.164474 | orchestrator | 2026-03-27 00:21:17 | INFO  | Prepare task for execution of sshconfig. 2026-03-27 00:21:17.235691 | orchestrator | 2026-03-27 00:21:17 | INFO  | Task 03a03ab6-466f-43e0-abbb-a2d21f16df6f (sshconfig) was prepared for execution. 
2026-03-27 00:21:17.235808 | orchestrator | 2026-03-27 00:21:17 | INFO  | It takes a moment until task 03a03ab6-466f-43e0-abbb-a2d21f16df6f (sshconfig) has been started and output is visible here. 2026-03-27 00:21:27.925887 | orchestrator | 2026-03-27 00:21:27.926110 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-27 00:21:27.926148 | orchestrator | 2026-03-27 00:21:27.926167 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-27 00:21:27.926179 | orchestrator | Friday 27 March 2026 00:21:20 +0000 (0:00:00.180) 0:00:00.180 ********** 2026-03-27 00:21:27.926190 | orchestrator | ok: [testbed-manager] 2026-03-27 00:21:27.926202 | orchestrator | 2026-03-27 00:21:27.926213 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-27 00:21:27.926224 | orchestrator | Friday 27 March 2026 00:21:21 +0000 (0:00:00.909) 0:00:01.090 ********** 2026-03-27 00:21:27.926265 | orchestrator | changed: [testbed-manager] 2026-03-27 00:21:27.926285 | orchestrator | 2026-03-27 00:21:27.926312 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-27 00:21:27.926334 | orchestrator | Friday 27 March 2026 00:21:21 +0000 (0:00:00.549) 0:00:01.640 ********** 2026-03-27 00:21:27.926352 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-27 00:21:27.926370 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-27 00:21:27.926387 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-27 00:21:27.926405 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-27 00:21:27.926423 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-27 00:21:27.926439 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-27 00:21:27.926456 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-27 00:21:27.926474 | orchestrator | 2026-03-27 00:21:27.926494 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-27 00:21:27.926513 | orchestrator | Friday 27 March 2026 00:21:27 +0000 (0:00:05.450) 0:00:07.091 ********** 2026-03-27 00:21:27.926532 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:21:27.926552 | orchestrator | 2026-03-27 00:21:27.926571 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-27 00:21:27.926589 | orchestrator | Friday 27 March 2026 00:21:27 +0000 (0:00:00.104) 0:00:07.195 ********** 2026-03-27 00:21:27.926602 | orchestrator | changed: [testbed-manager] 2026-03-27 00:21:27.926614 | orchestrator | 2026-03-27 00:21:27.926628 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:21:27.926641 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:21:27.926654 | orchestrator | 2026-03-27 00:21:27.926666 | orchestrator | 2026-03-27 00:21:27.926679 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:21:27.926692 | orchestrator | Friday 27 March 2026 00:21:27 +0000 (0:00:00.507) 0:00:07.703 ********** 2026-03-27 00:21:27.926704 | orchestrator | =============================================================================== 2026-03-27 00:21:27.926716 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.45s 2026-03-27 00:21:27.926728 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.91s 2026-03-27 00:21:27.926740 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.55s 2026-03-27 00:21:27.926752 | orchestrator | osism.commons.sshconfig : Assemble ssh config 
--------------------------- 0.51s 2026-03-27 00:21:27.926764 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s 2026-03-27 00:21:28.049629 | orchestrator | + osism apply known-hosts 2026-03-27 00:21:39.272154 | orchestrator | 2026-03-27 00:21:39 | INFO  | Prepare task for execution of known-hosts. 2026-03-27 00:21:39.344054 | orchestrator | 2026-03-27 00:21:39 | INFO  | Task 0490562f-bde3-489d-892e-ea024b8d58d8 (known-hosts) was prepared for execution. 2026-03-27 00:21:39.344142 | orchestrator | 2026-03-27 00:21:39 | INFO  | It takes a moment until task 0490562f-bde3-489d-892e-ea024b8d58d8 (known-hosts) has been started and output is visible here. 2026-03-27 00:21:54.452102 | orchestrator | 2026-03-27 00:21:54.452204 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-27 00:21:54.452218 | orchestrator | 2026-03-27 00:21:54.452229 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-27 00:21:54.452240 | orchestrator | Friday 27 March 2026 00:21:42 +0000 (0:00:00.172) 0:00:00.172 ********** 2026-03-27 00:21:54.452250 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-27 00:21:54.452261 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-27 00:21:54.452271 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-27 00:21:54.452299 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-27 00:21:54.452309 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-27 00:21:54.452319 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-27 00:21:54.452328 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-27 00:21:54.452338 | orchestrator | 2026-03-27 00:21:54.452347 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-27 
00:21:54.452358 | orchestrator | Friday 27 March 2026 00:21:48 +0000 (0:00:06.187) 0:00:06.359 ********** 2026-03-27 00:21:54.452377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-27 00:21:54.452390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-27 00:21:54.452400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-27 00:21:54.452410 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-27 00:21:54.452419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-27 00:21:54.452429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-27 00:21:54.452439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-27 00:21:54.452448 | orchestrator | 2026-03-27 00:21:54.452458 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:21:54.452468 | orchestrator | Friday 27 March 2026 00:21:48 +0000 (0:00:00.198) 0:00:06.558 ********** 2026-03-27 00:21:54.452478 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIAP23WG24CBEYgUNOtbl86FIIFv8J8mOc1Dp9/T55a+f9yjeZQR0e6q9GciNnXLh6xMYqRKp5YIXsb5cPQzAeQ=) 2026-03-27 00:21:54.452492 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcT/xyY3U0XxhnKmbqTHTlkpchlWrE5PBcLLOkJTuMHFsy3UcjM+cgljRwArClrwuCGU/FBKTlyXdgKiyrLUnabjoSBWCrEcXGYFkCJRewc7Ea4zwYcU9M61Y43Y1E+lmzQYMGEF/pOWQVapEkugAGBu7yC76nWo6nuEE9x8RmGTpk6qBSn8+6S8JoEKn++i7wX+X3M6gdRK2qB3IJXJmhHEpPF6Z+9AUObexWDkPNa8BvicvgELEoUfoZqzf4KNgIyoaC5A8MogOJntEVZOVZjkWr/BWEFabUqvkdHCzKymmki3/IEn4ap4n5QhrYx6AtlI0FBvCBr0WgHwEmhFKBESC/I481cn3aZuQCQdGNgvM4k/WCdbhb9uTuNR2bMtWEtep4zz+RKlpHO5RrdPlQgHoS8DwfIGHvjJHcKA5+qd4rGC2/UETEa55g9fO+UdVlXrtvLH3r7+Si6mQcgGwTQuTLnv+K97ejiFa3G/NA3fneSQ6aHNgNBXjl//HGfWk=) 2026-03-27 00:21:54.452505 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHRIo2GIE74Kvfx6l1/+yB9I5N+TGpQ/W4emqn9GHe05) 2026-03-27 00:21:54.452517 | orchestrator | 2026-03-27 00:21:54.452526 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:21:54.452536 | orchestrator | Friday 27 March 2026 00:21:49 +0000 (0:00:01.263) 0:00:07.821 ********** 2026-03-27 00:21:54.452547 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKUhPSK+n4MxZENf9RAkQ1/cv7GbzygtI23SW5KH7agKQmPbsfcnOWOVD87jhVwvmufXh0aQFrynrRIMmQL8uAw=) 2026-03-27 00:21:54.452559 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGyvJ3jM6S3YKbCm8HMZU25f6KTdBZIMV9pph/aTtFzE) 2026-03-27 00:21:54.452601 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDINvQ+1dg1+P8ejv0EQaqPLXUQAVUxaWPnDBMN5WWUpARzzNsoEntwzXvkaPXrdqoXYRRe5tj5A0kn2xFc8iG+f5X3W+jUHozLGkGisu8CjGqXiWkrHRmjIU/inTLP/nhi3AjJm+RK4sBLnTH64s1ObiPPv+ecXq6rL610qjAHbNh7YP6NMA0R5+BJvT57dLOA0fs+kbt6WzUrKZx55tOCEwqDxRxzctxRisHDS6slytUj5OBt20+tu3Jmz6ZA8Q3YgX6TS95fqTqLBN2VGGFgBCgvu8p/u2r8aMRG++UsVJ8pmWhLxrBTYgLOm+tgqKJw1ehjk/ijvRwdqpkU4dnLJ5+5UTy7dGh6mq0n1WYW4nqVrmk8enemYj/cpMcTbnAQip6Iux/0+brvNax9u6B0DhbYEKbOv+IgDCpKTTPIGjxoQ9tr+b6G73NCt/OqizWRCHN5YJkkdNkJpYxk/30eeS+58GRMFV5oeHVzcRxqZRJISy1q6tcG+hSEoKwmS6s=) 2026-03-27 00:21:54.452618 | orchestrator | 2026-03-27 00:21:54.452629 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:21:54.452640 | orchestrator | Friday 27 March 2026 00:21:50 +0000 (0:00:01.039) 0:00:08.861 ********** 2026-03-27 00:21:54.452651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIQLCAfPk6qXNdC/BlN8Pj5QPozODUElU0EZQf0utSDSm1dKeOT9or84w39NSpWWHHggRgLxAw5VnmQbBq18nNs=) 2026-03-27 00:21:54.452662 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHdopSii2BYyWjvOG4NLg0PVJwzvw363dpvelYlaaNzR) 2026-03-27 00:21:54.452733 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkd9NuarOkue9sCUO0bkiUADmXgcvmt5kI+VWiXUReQkM/XS2/SYJifFcdi8huviqGhHZXXt83ScU3R2ODGJFg+TUjz4XbEGCTJ5ZR87So1TIxzMragWZ1gXev/KXx+zwgVLu1D+TgGRglXnIgBIzLyRMdfQL7tjgwsCqjAT2pT8BRDU/53mwZoHSHWc0/qnLVcSBqUT5KY2JgGZEhbwaXTysn04vZ5z39G56mPnHy/hbYgRWVhHKyDDYJ/Ebkr8nRTMuGsQlF99tguLZQSh1MaIGFRcj76buuJB6RwQrFe8NDI9HEGS3NuvauqO7OE3S68wZYdpfS/CCpmeeamkX7yVQDCpxXANsEv7AWdQd02v2P6sTUHI67j3DpNS3/9Aox15XY1C78KvreTI9MkxUSeyv6G7ozDLzMr+dNwNYdgAh0MRV6uOC4biVfUI4/y0sKdMGDusIXuyFcjKFFhbff5PZ5TZvPcDYr0G5lGX666xjFJAsL5LdPFaqscac+qhs=) 2026-03-27 00:21:54.452745 | orchestrator | 2026-03-27 00:21:54.452756 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:21:54.452767 | orchestrator | Friday 27 March 2026 00:21:51 +0000 (0:00:01.026) 0:00:09.887 ********** 2026-03-27 00:21:54.452782 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOczQhuzp9Z5PaugXe3guSZMJW+gRDVjGcanRnkPIeMHcKwE9SStPO5s7qR0RC/A5xjNKO2Pfo5SJ0/tv4xzwbc=) 2026-03-27 00:21:54.452794 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5UBLja7/1KGBB7T6wN1J+T8jpSEN8I+c4txeSbUOaz) 2026-03-27 00:21:54.452806 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsG48ptFs30GE3glhRYzRGi8aEAxUb2eQyZBowsPZl9oTuq4yz9x6C5Kro9LO52dASKTZTevXVBXBbMoVwsXQfuwTMkYwssFM3Gww2b41Ipq+mKpctqTWPQFmMgD0c1wpycrA72tACPfKcmUl0A6hhGsFH0mUunt72K9ht/+VKYOydnHG2S9o9jIT53Fbq75G7w8sbc0hSS+DE76L0i50VI6tSnJBozoGEWKEjTIzxMOWx3lksGPugbBcA4iQINouGBw5HQY8lOUUN0aR1GOKGbwUFPOS9q4FODqej+9jpFED1Q1a9UjwwppF5DZ5eNZ+HhIXGynZxX+mq9L4m7aaY4rvMkHGCk880dgWFEJ9/DEhY/ANfL4+4NpHEl8NJYaLx7Rz46S0rOcPMfY/W0QEKjSVhy2xcdL+MoHlFYAPMDNBb0hzoya4m+hYvvItLNtYYJjGF2vbsrMD7sx6cc0kubIgoGB/72cpQ7YimWUHBogMZNrUbZRhfIOrU+DbACRE=) 2026-03-27 00:21:54.452817 | orchestrator | 2026-03-27 00:21:54.452828 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:21:54.452839 | orchestrator | Friday 27 March 2026 00:21:53 +0000 (0:00:01.035) 0:00:10.923 ********** 2026-03-27 00:21:54.452850 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKog2bWkxJcRkcodK0aolNv0UaSa35onpXa/iiO4QOeA) 2026-03-27 00:21:54.452861 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDV0TeqcpSvFor4L3Oy905TT8spQeEdMFf1ka+Yzp9j6wt4lGWsclGxg84BpePc/tjGT6ri7oGjsSOIx8ZRzYGR1LIgpSn6nZRV2SJNrhzRfmS5neh4AmGVTUPtQrD7cSuVxpArw4YbfyU+0R5W3zT1QtCVnO0s4XIyFlXjVMtVODqF//U8lgFAlNdo5AIrPt3cCNyv3NKdKLO1g/jqoJYU3oOkgtXg5nvlZpydKCpwhoO9IizxYMKMQndpZtzudBH6Ro7E/6vtOnideNVqyMzHZqdD3BXCbH4rP4EdoCOfRhH5yfLMCWBD9nm/0znbmlNI+4rMhl/4e0KYZYD+P8eJpb0cLXqxxX0Vu9yUn0+y0PNxv50WSfR7g3SFtVImp/Dnn66pZeHBRjbqhJIzpKGD/L7BlUt14iZ5SZJNBouhh6xelv3DdYwdJxUyuNVfSl5rBIftK6RT7L12vNOPHt9nXCuYLzcHAwAzUSmQfNVxEo1Dl500KAieF18Vk4qjivc=) 2026-03-27 00:21:54.452878 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCkYmiqLIH9Lbs6K4u23iOOiRVcv5oB8zd1zW7TNF02pxh7/O5/9kJlbqgnbw501dv1e8NSlj7RZT5IpCoKa5pw=) 2026-03-27 00:21:54.452906 | orchestrator | 2026-03-27 00:21:54.452918 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:21:54.452938 | orchestrator | Friday 27 March 2026 00:21:54 +0000 (0:00:01.053) 0:00:11.976 ********** 2026-03-27 00:21:54.452955 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP6seNez+C3PK1uujEzyYz5EpcsLWU5I5/5l8RAx5ROLdrs00ji1zF8fTWIr/4pQyzAtoUVYARr5LQz1c28CY7I=) 2026-03-27 00:22:05.250181 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4njvg+/0sQAVAqu5FyxHXTA5TxJfWZ5gIl8praf2tzubqUTdWeAPqJ1Sb+UofdaId0aSpx6YUS6hdhmTL+8xWsvoLZNnPJe/5o9768uJiD6NHN6njlA6fmlAo92aoGZyBcES9HZk2DXqPXGCoOrUL7Ox2CNq9rpuNUzslxfCaevalfxfFSzRNMX/EFNhNHFn4hdmnZQ7/2aEidFhFer7DxiUJsv483KZNh21WZtubLIZAPOoJxYRFs5hkevok8SrjNSXOBDyxl1ZUNi4gnNdYqzXQMThotVvksXEizeSt+kD6W+D/gNcNaGTgEH3sRCHF+/zcDH4moxd6VEYZeo5vnYDvl1ZKw2H/0PHmxIwab9l8gwmBLAOdVgLMzWozVyVoZsCepbV5Z1mO6AVN/yo9esMFkSDT5gTnI/VbSYsJgh76FcaJ/yuoWbK5rdM19OU/YsNMRsF+IsB47vDd6JLiv+sLTj/+PKCrA9IhVJETQS7Tt6/7PdazUfTXhrVJNL0=) 
2026-03-27 00:22:05.250316 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF3++A+Al7hIKIuUHEQ0mGoc6xMO4LdhJQKpWTUOYApb) 2026-03-27 00:22:05.250336 | orchestrator | 2026-03-27 00:22:05.250349 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:22:05.250362 | orchestrator | Friday 27 March 2026 00:21:55 +0000 (0:00:01.100) 0:00:13.077 ********** 2026-03-27 00:22:05.250373 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDvH49G7kc+QLU+YECJFrbykrG2+m2ucJkAASae5/uX/R/I9yMoYp65yS4MMK2h6VpNoqTAEN58rAfukfHSSE0wUcN0oK8tvBILpCGEhpm18nIAnfP7dzK6f1csia2WPXYq6zCOCRh3ThB8+rBkF2hhjhSYGorwmsDZLVBUtCvndtuBsmvndqyaL1YGJMA6lMNmDvaK+acrpsoiwzYFanFpICUhAV88/BYyfL4tJLxZcnmrMDOg1Btq0a31/B8aD3LB9kXcq2L4Dq5psCvfTpMR4sMKNvKI58pKjpyoMLa4MsVMPQgLE9EAZ2hkPN4Q7nXbG7UkW7levp2B0oeBcLddOCJwPrbZcUS3I82dR2mLgPBWPILp6dNOaTh6YiYYYCsuZ3e/C9sh8uQXySC+8vb7Zn0tpLTlOeIK6BtT7SVciaej6DXEUgf7ifl/nd9QpFJ9o1beDD9FbEMqab28o/DeqPdqWE3DYY1eGpYES4k/3Um/xbZsirIcKFv7yYaEnz8=) 2026-03-27 00:22:05.250386 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAqlUsSLb/Tcx+Han0dYWuJJadPirf2HUQRY3g428c36BT5krslMcArA9ymNN9nxJTtNiTGTyiz5YUtusqldmhk=) 2026-03-27 00:22:05.250399 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJSrnJTTijKOYcLQ2ilEKhx/n2cU4nY4UwfrJ2lsafmB) 2026-03-27 00:22:05.250424 | orchestrator | 2026-03-27 00:22:05.251284 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-27 00:22:05.251312 | orchestrator | Friday 27 March 2026 00:21:56 +0000 (0:00:01.029) 0:00:14.107 ********** 2026-03-27 00:22:05.251324 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-27 00:22:05.251336 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-0) 2026-03-27 00:22:05.251347 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-27 00:22:05.251358 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-27 00:22:05.251370 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-27 00:22:05.251401 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-27 00:22:05.251413 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-27 00:22:05.251449 | orchestrator | 2026-03-27 00:22:05.251461 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-27 00:22:05.251472 | orchestrator | Friday 27 March 2026 00:22:01 +0000 (0:00:05.219) 0:00:19.327 ********** 2026-03-27 00:22:05.251484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-27 00:22:05.251497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-27 00:22:05.251508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-27 00:22:05.251519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-27 00:22:05.251530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-27 00:22:05.251540 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-27 00:22:05.251551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-27 00:22:05.251562 | orchestrator | 2026-03-27 00:22:05.251594 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:22:05.251605 | orchestrator | Friday 27 March 2026 00:22:01 +0000 (0:00:00.161) 0:00:19.489 ********** 2026-03-27 00:22:05.251616 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHRIo2GIE74Kvfx6l1/+yB9I5N+TGpQ/W4emqn9GHe05) 2026-03-27 00:22:05.251630 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcT/xyY3U0XxhnKmbqTHTlkpchlWrE5PBcLLOkJTuMHFsy3UcjM+cgljRwArClrwuCGU/FBKTlyXdgKiyrLUnabjoSBWCrEcXGYFkCJRewc7Ea4zwYcU9M61Y43Y1E+lmzQYMGEF/pOWQVapEkugAGBu7yC76nWo6nuEE9x8RmGTpk6qBSn8+6S8JoEKn++i7wX+X3M6gdRK2qB3IJXJmhHEpPF6Z+9AUObexWDkPNa8BvicvgELEoUfoZqzf4KNgIyoaC5A8MogOJntEVZOVZjkWr/BWEFabUqvkdHCzKymmki3/IEn4ap4n5QhrYx6AtlI0FBvCBr0WgHwEmhFKBESC/I481cn3aZuQCQdGNgvM4k/WCdbhb9uTuNR2bMtWEtep4zz+RKlpHO5RrdPlQgHoS8DwfIGHvjJHcKA5+qd4rGC2/UETEa55g9fO+UdVlXrtvLH3r7+Si6mQcgGwTQuTLnv+K97ejiFa3G/NA3fneSQ6aHNgNBXjl//HGfWk=) 2026-03-27 00:22:05.251643 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIAP23WG24CBEYgUNOtbl86FIIFv8J8mOc1Dp9/T55a+f9yjeZQR0e6q9GciNnXLh6xMYqRKp5YIXsb5cPQzAeQ=) 2026-03-27 00:22:05.251654 | orchestrator | 2026-03-27 00:22:05.251665 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:22:05.251676 | orchestrator | Friday 27 March 2026 
00:22:02 +0000 (0:00:00.988) 0:00:20.477 ********** 2026-03-27 00:22:05.251687 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGyvJ3jM6S3YKbCm8HMZU25f6KTdBZIMV9pph/aTtFzE) 2026-03-27 00:22:05.251698 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDINvQ+1dg1+P8ejv0EQaqPLXUQAVUxaWPnDBMN5WWUpARzzNsoEntwzXvkaPXrdqoXYRRe5tj5A0kn2xFc8iG+f5X3W+jUHozLGkGisu8CjGqXiWkrHRmjIU/inTLP/nhi3AjJm+RK4sBLnTH64s1ObiPPv+ecXq6rL610qjAHbNh7YP6NMA0R5+BJvT57dLOA0fs+kbt6WzUrKZx55tOCEwqDxRxzctxRisHDS6slytUj5OBt20+tu3Jmz6ZA8Q3YgX6TS95fqTqLBN2VGGFgBCgvu8p/u2r8aMRG++UsVJ8pmWhLxrBTYgLOm+tgqKJw1ehjk/ijvRwdqpkU4dnLJ5+5UTy7dGh6mq0n1WYW4nqVrmk8enemYj/cpMcTbnAQip6Iux/0+brvNax9u6B0DhbYEKbOv+IgDCpKTTPIGjxoQ9tr+b6G73NCt/OqizWRCHN5YJkkdNkJpYxk/30eeS+58GRMFV5oeHVzcRxqZRJISy1q6tcG+hSEoKwmS6s=) 2026-03-27 00:22:05.251717 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKUhPSK+n4MxZENf9RAkQ1/cv7GbzygtI23SW5KH7agKQmPbsfcnOWOVD87jhVwvmufXh0aQFrynrRIMmQL8uAw=) 2026-03-27 00:22:05.251728 | orchestrator | 2026-03-27 00:22:05.251739 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:22:05.251750 | orchestrator | Friday 27 March 2026 00:22:03 +0000 (0:00:01.012) 0:00:21.490 ********** 2026-03-27 00:22:05.251760 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIQLCAfPk6qXNdC/BlN8Pj5QPozODUElU0EZQf0utSDSm1dKeOT9or84w39NSpWWHHggRgLxAw5VnmQbBq18nNs=) 2026-03-27 00:22:05.251772 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCkd9NuarOkue9sCUO0bkiUADmXgcvmt5kI+VWiXUReQkM/XS2/SYJifFcdi8huviqGhHZXXt83ScU3R2ODGJFg+TUjz4XbEGCTJ5ZR87So1TIxzMragWZ1gXev/KXx+zwgVLu1D+TgGRglXnIgBIzLyRMdfQL7tjgwsCqjAT2pT8BRDU/53mwZoHSHWc0/qnLVcSBqUT5KY2JgGZEhbwaXTysn04vZ5z39G56mPnHy/hbYgRWVhHKyDDYJ/Ebkr8nRTMuGsQlF99tguLZQSh1MaIGFRcj76buuJB6RwQrFe8NDI9HEGS3NuvauqO7OE3S68wZYdpfS/CCpmeeamkX7yVQDCpxXANsEv7AWdQd02v2P6sTUHI67j3DpNS3/9Aox15XY1C78KvreTI9MkxUSeyv6G7ozDLzMr+dNwNYdgAh0MRV6uOC4biVfUI4/y0sKdMGDusIXuyFcjKFFhbff5PZ5TZvPcDYr0G5lGX666xjFJAsL5LdPFaqscac+qhs=) 2026-03-27 00:22:05.251784 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHdopSii2BYyWjvOG4NLg0PVJwzvw363dpvelYlaaNzR) 2026-03-27 00:22:05.251795 | orchestrator | 2026-03-27 00:22:05.251805 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:22:05.251816 | orchestrator | Friday 27 March 2026 00:22:04 +0000 (0:00:01.011) 0:00:22.501 ********** 2026-03-27 00:22:05.251827 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOczQhuzp9Z5PaugXe3guSZMJW+gRDVjGcanRnkPIeMHcKwE9SStPO5s7qR0RC/A5xjNKO2Pfo5SJ0/tv4xzwbc=) 2026-03-27 00:22:05.251863 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsG48ptFs30GE3glhRYzRGi8aEAxUb2eQyZBowsPZl9oTuq4yz9x6C5Kro9LO52dASKTZTevXVBXBbMoVwsXQfuwTMkYwssFM3Gww2b41Ipq+mKpctqTWPQFmMgD0c1wpycrA72tACPfKcmUl0A6hhGsFH0mUunt72K9ht/+VKYOydnHG2S9o9jIT53Fbq75G7w8sbc0hSS+DE76L0i50VI6tSnJBozoGEWKEjTIzxMOWx3lksGPugbBcA4iQINouGBw5HQY8lOUUN0aR1GOKGbwUFPOS9q4FODqej+9jpFED1Q1a9UjwwppF5DZ5eNZ+HhIXGynZxX+mq9L4m7aaY4rvMkHGCk880dgWFEJ9/DEhY/ANfL4+4NpHEl8NJYaLx7Rz46S0rOcPMfY/W0QEKjSVhy2xcdL+MoHlFYAPMDNBb0hzoya4m+hYvvItLNtYYJjGF2vbsrMD7sx6cc0kubIgoGB/72cpQ7YimWUHBogMZNrUbZRhfIOrU+DbACRE=) 2026-03-27 00:22:09.537148 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5UBLja7/1KGBB7T6wN1J+T8jpSEN8I+c4txeSbUOaz) 2026-03-27 00:22:09.537251 | orchestrator | 2026-03-27 00:22:09.537268 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:22:09.537281 | orchestrator | Friday 27 March 2026 00:22:05 +0000 (0:00:00.996) 0:00:23.498 ********** 2026-03-27 00:22:09.537294 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDV0TeqcpSvFor4L3Oy905TT8spQeEdMFf1ka+Yzp9j6wt4lGWsclGxg84BpePc/tjGT6ri7oGjsSOIx8ZRzYGR1LIgpSn6nZRV2SJNrhzRfmS5neh4AmGVTUPtQrD7cSuVxpArw4YbfyU+0R5W3zT1QtCVnO0s4XIyFlXjVMtVODqF//U8lgFAlNdo5AIrPt3cCNyv3NKdKLO1g/jqoJYU3oOkgtXg5nvlZpydKCpwhoO9IizxYMKMQndpZtzudBH6Ro7E/6vtOnideNVqyMzHZqdD3BXCbH4rP4EdoCOfRhH5yfLMCWBD9nm/0znbmlNI+4rMhl/4e0KYZYD+P8eJpb0cLXqxxX0Vu9yUn0+y0PNxv50WSfR7g3SFtVImp/Dnn66pZeHBRjbqhJIzpKGD/L7BlUt14iZ5SZJNBouhh6xelv3DdYwdJxUyuNVfSl5rBIftK6RT7L12vNOPHt9nXCuYLzcHAwAzUSmQfNVxEo1Dl500KAieF18Vk4qjivc=) 2026-03-27 00:22:09.537309 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKog2bWkxJcRkcodK0aolNv0UaSa35onpXa/iiO4QOeA) 2026-03-27 00:22:09.537348 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCkYmiqLIH9Lbs6K4u23iOOiRVcv5oB8zd1zW7TNF02pxh7/O5/9kJlbqgnbw501dv1e8NSlj7RZT5IpCoKa5pw=) 2026-03-27 00:22:09.537362 | orchestrator | 2026-03-27 00:22:09.537388 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:22:09.537399 | orchestrator | Friday 27 March 2026 00:22:06 +0000 (0:00:01.020) 0:00:24.519 ********** 2026-03-27 00:22:09.537411 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC4njvg+/0sQAVAqu5FyxHXTA5TxJfWZ5gIl8praf2tzubqUTdWeAPqJ1Sb+UofdaId0aSpx6YUS6hdhmTL+8xWsvoLZNnPJe/5o9768uJiD6NHN6njlA6fmlAo92aoGZyBcES9HZk2DXqPXGCoOrUL7Ox2CNq9rpuNUzslxfCaevalfxfFSzRNMX/EFNhNHFn4hdmnZQ7/2aEidFhFer7DxiUJsv483KZNh21WZtubLIZAPOoJxYRFs5hkevok8SrjNSXOBDyxl1ZUNi4gnNdYqzXQMThotVvksXEizeSt+kD6W+D/gNcNaGTgEH3sRCHF+/zcDH4moxd6VEYZeo5vnYDvl1ZKw2H/0PHmxIwab9l8gwmBLAOdVgLMzWozVyVoZsCepbV5Z1mO6AVN/yo9esMFkSDT5gTnI/VbSYsJgh76FcaJ/yuoWbK5rdM19OU/YsNMRsF+IsB47vDd6JLiv+sLTj/+PKCrA9IhVJETQS7Tt6/7PdazUfTXhrVJNL0=) 2026-03-27 00:22:09.537422 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP6seNez+C3PK1uujEzyYz5EpcsLWU5I5/5l8RAx5ROLdrs00ji1zF8fTWIr/4pQyzAtoUVYARr5LQz1c28CY7I=) 2026-03-27 00:22:09.537433 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF3++A+Al7hIKIuUHEQ0mGoc6xMO4LdhJQKpWTUOYApb) 2026-03-27 00:22:09.537444 | orchestrator | 2026-03-27 00:22:09.537455 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-27 00:22:09.537466 | orchestrator | Friday 27 March 2026 00:22:07 +0000 (0:00:00.978) 0:00:25.498 ********** 2026-03-27 00:22:09.537476 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDvH49G7kc+QLU+YECJFrbykrG2+m2ucJkAASae5/uX/R/I9yMoYp65yS4MMK2h6VpNoqTAEN58rAfukfHSSE0wUcN0oK8tvBILpCGEhpm18nIAnfP7dzK6f1csia2WPXYq6zCOCRh3ThB8+rBkF2hhjhSYGorwmsDZLVBUtCvndtuBsmvndqyaL1YGJMA6lMNmDvaK+acrpsoiwzYFanFpICUhAV88/BYyfL4tJLxZcnmrMDOg1Btq0a31/B8aD3LB9kXcq2L4Dq5psCvfTpMR4sMKNvKI58pKjpyoMLa4MsVMPQgLE9EAZ2hkPN4Q7nXbG7UkW7levp2B0oeBcLddOCJwPrbZcUS3I82dR2mLgPBWPILp6dNOaTh6YiYYYCsuZ3e/C9sh8uQXySC+8vb7Zn0tpLTlOeIK6BtT7SVciaej6DXEUgf7ifl/nd9QpFJ9o1beDD9FbEMqab28o/DeqPdqWE3DYY1eGpYES4k/3Um/xbZsirIcKFv7yYaEnz8=) 2026-03-27 00:22:09.537487 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAqlUsSLb/Tcx+Han0dYWuJJadPirf2HUQRY3g428c36BT5krslMcArA9ymNN9nxJTtNiTGTyiz5YUtusqldmhk=) 2026-03-27 00:22:09.537498 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJSrnJTTijKOYcLQ2ilEKhx/n2cU4nY4UwfrJ2lsafmB) 2026-03-27 00:22:09.537509 | orchestrator | 2026-03-27 00:22:09.537520 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-27 00:22:09.537531 | orchestrator | Friday 27 March 2026 00:22:08 +0000 (0:00:01.048) 0:00:26.546 ********** 2026-03-27 00:22:09.537542 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-27 00:22:09.537553 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-27 00:22:09.537564 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-27 00:22:09.537574 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-27 00:22:09.537585 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-27 00:22:09.537617 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-27 00:22:09.537631 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-27 00:22:09.537644 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:22:09.537658 | orchestrator | 2026-03-27 00:22:09.537671 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-27 00:22:09.537684 | orchestrator | Friday 27 March 2026 00:22:08 +0000 (0:00:00.172) 0:00:26.718 ********** 2026-03-27 00:22:09.537705 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:22:09.537718 | orchestrator | 2026-03-27 00:22:09.537732 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-27 00:22:09.537744 | orchestrator | Friday 27 March 2026 
00:22:08 +0000 (0:00:00.038) 0:00:26.757 ********** 2026-03-27 00:22:09.537755 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:22:09.537766 | orchestrator | 2026-03-27 00:22:09.537777 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-27 00:22:09.537788 | orchestrator | Friday 27 March 2026 00:22:08 +0000 (0:00:00.039) 0:00:26.796 ********** 2026-03-27 00:22:09.537798 | orchestrator | changed: [testbed-manager] 2026-03-27 00:22:09.537809 | orchestrator | 2026-03-27 00:22:09.537820 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:22:09.537831 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-27 00:22:09.537843 | orchestrator | 2026-03-27 00:22:09.537854 | orchestrator | 2026-03-27 00:22:09.537865 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:22:09.537876 | orchestrator | Friday 27 March 2026 00:22:09 +0000 (0:00:00.464) 0:00:27.261 ********** 2026-03-27 00:22:09.537887 | orchestrator | =============================================================================== 2026-03-27 00:22:09.537897 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.19s 2026-03-27 00:22:09.537908 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.22s 2026-03-27 00:22:09.537920 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s 2026-03-27 00:22:09.537931 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-27 00:22:09.537942 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-27 00:22:09.537952 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 
2026-03-27 00:22:09.537963 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-27 00:22:09.537974 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-27 00:22:09.538008 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-27 00:22:09.538076 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-27 00:22:09.538088 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-27 00:22:09.538099 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-27 00:22:09.538109 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-27 00:22:09.538120 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-27 00:22:09.538139 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-27 00:22:09.538150 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-27 00:22:09.538161 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.47s 2026-03-27 00:22:09.538171 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.20s 2026-03-27 00:22:09.538182 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-03-27 00:22:09.538193 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2026-03-27 00:22:09.709257 | orchestrator | + osism apply squid 2026-03-27 00:22:20.948655 | orchestrator | 2026-03-27 00:22:20 | INFO  | Prepare task for execution of squid. 
2026-03-27 00:22:21.016859 | orchestrator | 2026-03-27 00:22:21 | INFO  | Task 58faf70d-dc3b-4dbc-bbc8-559089976b00 (squid) was prepared for execution. 2026-03-27 00:22:21.016963 | orchestrator | 2026-03-27 00:22:21 | INFO  | It takes a moment until task 58faf70d-dc3b-4dbc-bbc8-559089976b00 (squid) has been started and output is visible here. 2026-03-27 00:24:21.172302 | orchestrator | 2026-03-27 00:24:21.172447 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-27 00:24:21.172465 | orchestrator | 2026-03-27 00:24:21.172479 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-27 00:24:21.172491 | orchestrator | Friday 27 March 2026 00:22:23 +0000 (0:00:00.190) 0:00:00.190 ********** 2026-03-27 00:24:21.172503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-27 00:24:21.172515 | orchestrator | 2026-03-27 00:24:21.172526 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-27 00:24:21.172538 | orchestrator | Friday 27 March 2026 00:22:24 +0000 (0:00:00.080) 0:00:00.271 ********** 2026-03-27 00:24:21.172549 | orchestrator | ok: [testbed-manager] 2026-03-27 00:24:21.172561 | orchestrator | 2026-03-27 00:24:21.172572 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-27 00:24:21.172583 | orchestrator | Friday 27 March 2026 00:22:26 +0000 (0:00:02.295) 0:00:02.567 ********** 2026-03-27 00:24:21.172595 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-27 00:24:21.172605 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-27 00:24:21.172617 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-27 00:24:21.172628 | orchestrator | 2026-03-27 00:24:21.172639 
| orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-27 00:24:21.172650 | orchestrator | Friday 27 March 2026 00:22:27 +0000 (0:00:01.221) 0:00:03.788 ********** 2026-03-27 00:24:21.172661 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-27 00:24:21.172672 | orchestrator | 2026-03-27 00:24:21.172683 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-27 00:24:21.172694 | orchestrator | Friday 27 March 2026 00:22:28 +0000 (0:00:01.085) 0:00:04.874 ********** 2026-03-27 00:24:21.172705 | orchestrator | ok: [testbed-manager] 2026-03-27 00:24:21.172716 | orchestrator | 2026-03-27 00:24:21.172727 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-27 00:24:21.172738 | orchestrator | Friday 27 March 2026 00:22:29 +0000 (0:00:00.333) 0:00:05.207 ********** 2026-03-27 00:24:21.172749 | orchestrator | changed: [testbed-manager] 2026-03-27 00:24:21.172760 | orchestrator | 2026-03-27 00:24:21.172771 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-27 00:24:21.172782 | orchestrator | Friday 27 March 2026 00:22:29 +0000 (0:00:00.926) 0:00:06.134 ********** 2026-03-27 00:24:21.172793 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-27 00:24:21.172805 | orchestrator | ok: [testbed-manager] 2026-03-27 00:24:21.172816 | orchestrator | 2026-03-27 00:24:21.172827 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-27 00:24:21.172838 | orchestrator | Friday 27 March 2026 00:23:08 +0000 (0:00:38.430) 0:00:44.565 ********** 2026-03-27 00:24:21.172850 | orchestrator | changed: [testbed-manager] 2026-03-27 00:24:21.172864 | orchestrator | 2026-03-27 00:24:21.172894 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-27 00:24:21.172907 | orchestrator | Friday 27 March 2026 00:23:20 +0000 (0:00:11.897) 0:00:56.463 ********** 2026-03-27 00:24:21.172920 | orchestrator | Pausing for 60 seconds 2026-03-27 00:24:21.172933 | orchestrator | changed: [testbed-manager] 2026-03-27 00:24:21.172945 | orchestrator | 2026-03-27 00:24:21.172958 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-27 00:24:21.172993 | orchestrator | Friday 27 March 2026 00:24:20 +0000 (0:01:00.075) 0:01:56.538 ********** 2026-03-27 00:24:21.173005 | orchestrator | ok: [testbed-manager] 2026-03-27 00:24:21.173018 | orchestrator | 2026-03-27 00:24:21.173030 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-27 00:24:21.173066 | orchestrator | Friday 27 March 2026 00:24:20 +0000 (0:00:00.069) 0:01:56.608 ********** 2026-03-27 00:24:21.173079 | orchestrator | changed: [testbed-manager] 2026-03-27 00:24:21.173091 | orchestrator | 2026-03-27 00:24:21.173104 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:24:21.173117 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:24:21.173129 | orchestrator | 2026-03-27 00:24:21.173141 | orchestrator | 2026-03-27 00:24:21.173154 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-27 00:24:21.173165 | orchestrator | Friday 27 March 2026 00:24:20 +0000 (0:00:00.573) 0:01:57.182 ********** 2026-03-27 00:24:21.173179 | orchestrator | =============================================================================== 2026-03-27 00:24:21.173191 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-03-27 00:24:21.173203 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 38.43s 2026-03-27 00:24:21.173215 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.90s 2026-03-27 00:24:21.173226 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.30s 2026-03-27 00:24:21.173237 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2026-03-27 00:24:21.173247 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2026-03-27 00:24:21.173258 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2026-03-27 00:24:21.173269 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.57s 2026-03-27 00:24:21.173279 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2026-03-27 00:24:21.173290 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-03-27 00:24:21.173301 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-27 00:24:21.341838 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-27 00:24:21.341932 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-27 00:24:21.349146 | orchestrator | + set -e 2026-03-27 00:24:21.349180 | orchestrator | + NAMESPACE=kolla 2026-03-27 
00:24:21.349193 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-27 00:24:21.355643 | orchestrator | ++ semver latest 9.0.0 2026-03-27 00:24:21.413645 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-27 00:24:21.413737 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-27 00:24:21.414239 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-27 00:24:32.695306 | orchestrator | 2026-03-27 00:24:32 | INFO  | Prepare task for execution of operator. 2026-03-27 00:24:32.766368 | orchestrator | 2026-03-27 00:24:32 | INFO  | Task 4ca39f2e-6c9f-4a4d-b871-34c793b0000a (operator) was prepared for execution. 2026-03-27 00:24:32.766461 | orchestrator | 2026-03-27 00:24:32 | INFO  | It takes a moment until task 4ca39f2e-6c9f-4a4d-b871-34c793b0000a (operator) has been started and output is visible here. 2026-03-27 00:24:47.908864 | orchestrator | 2026-03-27 00:24:47.908997 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-27 00:24:47.909016 | orchestrator | 2026-03-27 00:24:47.909029 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-27 00:24:47.909040 | orchestrator | Friday 27 March 2026 00:24:35 +0000 (0:00:00.182) 0:00:00.182 ********** 2026-03-27 00:24:47.909052 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:24:47.909064 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:24:47.909076 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:24:47.909086 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:24:47.909097 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:24:47.909107 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:24:47.909122 | orchestrator | 2026-03-27 00:24:47.909133 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-27 00:24:47.909169 | orchestrator | Friday 27 March 2026 00:24:39 
+0000 (0:00:03.509) 0:00:03.691 ********** 2026-03-27 00:24:47.909180 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:24:47.909191 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:24:47.909201 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:24:47.909212 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:24:47.909222 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:24:47.909233 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:24:47.909243 | orchestrator | 2026-03-27 00:24:47.909267 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-27 00:24:47.909279 | orchestrator | 2026-03-27 00:24:47.909290 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-27 00:24:47.909301 | orchestrator | Friday 27 March 2026 00:24:40 +0000 (0:00:00.828) 0:00:04.520 ********** 2026-03-27 00:24:47.909312 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:24:47.909322 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:24:47.909333 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:24:47.909343 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:24:47.909354 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:24:47.909364 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:24:47.909375 | orchestrator | 2026-03-27 00:24:47.909386 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-27 00:24:47.909397 | orchestrator | Friday 27 March 2026 00:24:40 +0000 (0:00:00.162) 0:00:04.682 ********** 2026-03-27 00:24:47.909407 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:24:47.909419 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:24:47.909432 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:24:47.909444 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:24:47.909456 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:24:47.909469 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:24:47.909481 | orchestrator | 
2026-03-27 00:24:47.909510 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-27 00:24:47.909522 | orchestrator | Friday 27 March 2026 00:24:40 +0000 (0:00:00.153) 0:00:04.836 ********** 2026-03-27 00:24:47.909535 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:24:47.909548 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:24:47.909561 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:24:47.909573 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:24:47.909585 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:24:47.909597 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:24:47.909609 | orchestrator | 2026-03-27 00:24:47.909622 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-27 00:24:47.909634 | orchestrator | Friday 27 March 2026 00:24:41 +0000 (0:00:00.689) 0:00:05.525 ********** 2026-03-27 00:24:47.909647 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:24:47.909659 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:24:47.909672 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:24:47.909684 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:24:47.909696 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:24:47.909708 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:24:47.909720 | orchestrator | 2026-03-27 00:24:47.909732 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-27 00:24:47.909744 | orchestrator | Friday 27 March 2026 00:24:42 +0000 (0:00:00.886) 0:00:06.411 ********** 2026-03-27 00:24:47.909758 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-27 00:24:47.909770 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-27 00:24:47.909781 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-27 00:24:47.909797 | orchestrator | changed: [testbed-node-4] => (item=adm) 
2026-03-27 00:24:47.909816 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-27 00:24:47.909836 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-27 00:24:47.909854 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-27 00:24:47.909875 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-27 00:24:47.909891 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-27 00:24:47.909913 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-27 00:24:47.909923 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-27 00:24:47.909934 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-27 00:24:47.909945 | orchestrator |
2026-03-27 00:24:47.909956 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-27 00:24:47.909986 | orchestrator | Friday 27 March 2026 00:24:43 +0000 (0:00:01.163) 0:00:07.575 **********
2026-03-27 00:24:47.909997 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:24:47.910007 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:24:47.910067 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:24:47.910079 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:24:47.910090 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:24:47.910101 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:24:47.910112 | orchestrator |
2026-03-27 00:24:47.910122 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-27 00:24:47.910135 | orchestrator | Friday 27 March 2026 00:24:44 +0000 (0:00:01.285) 0:00:08.861 **********
2026-03-27 00:24:47.910145 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-27 00:24:47.910156 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-27 00:24:47.910167 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-27 00:24:47.910178 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-27 00:24:47.910189 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-27 00:24:47.910220 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-27 00:24:47.910231 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-27 00:24:47.910242 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-27 00:24:47.910253 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-27 00:24:47.910263 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-27 00:24:47.910274 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-27 00:24:47.910285 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-27 00:24:47.910295 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-27 00:24:47.910306 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-27 00:24:47.910317 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-27 00:24:47.910328 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-27 00:24:47.910338 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-27 00:24:47.910349 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-27 00:24:47.910360 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-27 00:24:47.910370 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-27 00:24:47.910381 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-27 00:24:47.910392 | orchestrator |
2026-03-27 00:24:47.910403 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-27 00:24:47.910415 | orchestrator | Friday 27 March 2026 00:24:45 +0000 (0:00:01.335) 0:00:10.196 **********
2026-03-27 00:24:47.910425 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:24:47.910436 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:24:47.910447 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:24:47.910463 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:24:47.910474 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:24:47.910485 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:24:47.910496 | orchestrator |
2026-03-27 00:24:47.910506 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-27 00:24:47.910527 | orchestrator | Friday 27 March 2026 00:24:46 +0000 (0:00:00.170) 0:00:10.367 **********
2026-03-27 00:24:47.910539 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:24:47.910549 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:24:47.910560 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:24:47.910571 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:24:47.910581 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:24:47.910592 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:24:47.910602 | orchestrator |
2026-03-27 00:24:47.910613 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-27 00:24:47.910624 | orchestrator | Friday 27 March 2026 00:24:46 +0000 (0:00:00.173) 0:00:10.540 **********
2026-03-27 00:24:47.910635 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:24:47.910645 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:24:47.910656 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:24:47.910667 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:24:47.910677 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:24:47.910688 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:24:47.910698 | orchestrator |
2026-03-27 00:24:47.910709 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-27 00:24:47.910720 | orchestrator | Friday 27 March 2026 00:24:46 +0000 (0:00:00.538) 0:00:11.079 **********
2026-03-27 00:24:47.910730 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:24:47.910741 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:24:47.910752 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:24:47.910763 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:24:47.910773 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:24:47.910784 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:24:47.910795 | orchestrator |
2026-03-27 00:24:47.910805 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-27 00:24:47.910816 | orchestrator | Friday 27 March 2026 00:24:46 +0000 (0:00:00.180) 0:00:11.260 **********
2026-03-27 00:24:47.910827 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-27 00:24:47.910838 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:24:47.910848 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-27 00:24:47.910859 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-27 00:24:47.910870 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:24:47.910880 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:24:47.910891 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-27 00:24:47.910901 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:24:47.910912 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-27 00:24:47.910923 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:24:47.910933 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-27 00:24:47.910944 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:24:47.910954 | orchestrator |
2026-03-27 00:24:47.910983 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-27 00:24:47.910994 | orchestrator | Friday 27 March 2026 00:24:47 +0000 (0:00:00.691) 0:00:11.951 **********
2026-03-27 00:24:47.911004 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:24:47.911015 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:24:47.911026 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:24:47.911036 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:24:47.911047 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:24:47.911057 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:24:47.911068 | orchestrator |
2026-03-27 00:24:47.911079 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-27 00:24:47.911089 | orchestrator | Friday 27 March 2026 00:24:47 +0000 (0:00:00.150) 0:00:12.102 **********
2026-03-27 00:24:47.911100 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:24:47.911111 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:24:47.911121 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:24:47.911132 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:24:47.911157 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:24:49.261328 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:24:49.261427 | orchestrator |
2026-03-27 00:24:49.261444 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-27 00:24:49.261457 | orchestrator | Friday 27 March 2026 00:24:47 +0000 (0:00:00.168) 0:00:12.270 **********
2026-03-27 00:24:49.261468 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:24:49.261479 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:24:49.261490 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:24:49.261501 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:24:49.261511 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:24:49.261522 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:24:49.261533 | orchestrator |
2026-03-27 00:24:49.261543 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-27 00:24:49.261554 | orchestrator | Friday 27 March 2026 00:24:48 +0000 (0:00:00.162) 0:00:12.433 **********
2026-03-27 00:24:49.261564 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:24:49.261575 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:24:49.261585 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:24:49.261596 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:24:49.261606 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:24:49.261617 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:24:49.261627 | orchestrator |
2026-03-27 00:24:49.261638 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-27 00:24:49.261649 | orchestrator | Friday 27 March 2026 00:24:48 +0000 (0:00:00.692) 0:00:13.126 **********
2026-03-27 00:24:49.261659 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:24:49.261670 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:24:49.261680 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:24:49.261691 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:24:49.261701 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:24:49.261711 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:24:49.261722 | orchestrator |
2026-03-27 00:24:49.261733 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:24:49.261745 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-27 00:24:49.261757 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-27 00:24:49.261789 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-27 00:24:49.261800 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-27 00:24:49.261811 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-27 00:24:49.261822 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-27 00:24:49.261833 | orchestrator |
2026-03-27 00:24:49.261843 | orchestrator |
2026-03-27 00:24:49.261854 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:24:49.261867 | orchestrator | Friday 27 March 2026 00:24:49 +0000 (0:00:00.302) 0:00:13.428 **********
2026-03-27 00:24:49.261880 | orchestrator | ===============================================================================
2026-03-27 00:24:49.261891 | orchestrator | Gathering Facts --------------------------------------------------------- 3.51s
2026-03-27 00:24:49.261903 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.34s
2026-03-27 00:24:49.261916 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s
2026-03-27 00:24:49.261950 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s
2026-03-27 00:24:49.262004 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.89s
2026-03-27 00:24:49.262072 | orchestrator | Do not require tty for all users ---------------------------------------- 0.83s
2026-03-27 00:24:49.262087 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s
2026-03-27 00:24:49.262099 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2026-03-27 00:24:49.262111 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.69s
2026-03-27 00:24:49.262122 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s
2026-03-27 00:24:49.262134 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.30s
2026-03-27 00:24:49.262146 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-03-27 00:24:49.262158 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-03-27 00:24:49.262171 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-03-27 00:24:49.262183 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-03-27 00:24:49.262196 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-03-27 00:24:49.262208 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2026-03-27 00:24:49.262220 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-03-27 00:24:49.262231 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-03-27 00:24:49.461448 | orchestrator | + osism apply --environment custom facts
2026-03-27 00:24:50.810452 | orchestrator | 2026-03-27 00:24:50 | INFO  | Trying to run play facts in environment custom
2026-03-27 00:25:00.985904 | orchestrator | 2026-03-27 00:25:00 | INFO  | Prepare task for execution of facts.
2026-03-27 00:25:01.064326 | orchestrator | 2026-03-27 00:25:01 | INFO  | Task 987af834-01fd-449e-9dac-374b5e96f118 (facts) was prepared for execution.
2026-03-27 00:25:01.064415 | orchestrator | 2026-03-27 00:25:01 | INFO  | It takes a moment until task 987af834-01fd-449e-9dac-374b5e96f118 (facts) has been started and output is visible here.
2026-03-27 00:25:45.580912 | orchestrator |
2026-03-27 00:25:45.581059 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-27 00:25:45.581075 | orchestrator |
2026-03-27 00:25:45.581085 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-27 00:25:45.581094 | orchestrator | Friday 27 March 2026 00:25:04 +0000 (0:00:00.116) 0:00:00.116 **********
2026-03-27 00:25:45.581103 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:25:45.581112 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:25:45.581121 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:25:45.581130 | orchestrator | ok: [testbed-manager]
2026-03-27 00:25:45.581139 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:25:45.581148 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:25:45.581156 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:25:45.581165 | orchestrator |
2026-03-27 00:25:45.581174 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-27 00:25:45.581182 | orchestrator | Friday 27 March 2026 00:25:05 +0000 (0:00:01.458) 0:00:01.575 **********
2026-03-27 00:25:45.581191 | orchestrator | ok: [testbed-manager]
2026-03-27 00:25:45.581199 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:25:45.581208 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:25:45.581217 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:25:45.581226 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:25:45.581249 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:25:45.581258 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:25:45.581267 | orchestrator |
2026-03-27 00:25:45.581296 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-27 00:25:45.581306 | orchestrator |
2026-03-27 00:25:45.581314 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-27 00:25:45.581323 | orchestrator | Friday 27 March 2026 00:25:06 +0000 (0:00:01.257) 0:00:02.832 **********
2026-03-27 00:25:45.581331 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:25:45.581340 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:25:45.581348 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:25:45.581357 | orchestrator |
2026-03-27 00:25:45.581366 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-27 00:25:45.581375 | orchestrator | Friday 27 March 2026 00:25:06 +0000 (0:00:00.102) 0:00:02.935 **********
2026-03-27 00:25:45.581384 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:25:45.581392 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:25:45.581401 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:25:45.581409 | orchestrator |
2026-03-27 00:25:45.581418 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-27 00:25:45.581426 | orchestrator | Friday 27 March 2026 00:25:07 +0000 (0:00:00.217) 0:00:03.153 **********
2026-03-27 00:25:45.581435 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:25:45.581443 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:25:45.581452 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:25:45.581462 | orchestrator |
2026-03-27 00:25:45.581472 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-27 00:25:45.581482 | orchestrator | Friday 27 March 2026 00:25:07 +0000 (0:00:00.138) 0:00:03.372 **********
2026-03-27 00:25:45.581493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:25:45.581504 | orchestrator |
2026-03-27 00:25:45.581514 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-27 00:25:45.581524 | orchestrator | Friday 27 March 2026 00:25:07 +0000 (0:00:00.395) 0:00:03.511 **********
2026-03-27 00:25:45.581534 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:25:45.581543 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:25:45.581553 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:25:45.581562 | orchestrator |
2026-03-27 00:25:45.581572 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-27 00:25:45.581582 | orchestrator | Friday 27 March 2026 00:25:07 +0000 (0:00:00.144) 0:00:03.906 **********
2026-03-27 00:25:45.581592 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:25:45.581602 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:25:45.581612 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:25:45.581622 | orchestrator |
2026-03-27 00:25:45.581632 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-27 00:25:45.581642 | orchestrator | Friday 27 March 2026 00:25:08 +0000 (0:00:00.144) 0:00:04.050 **********
2026-03-27 00:25:45.581652 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:25:45.581662 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:25:45.581671 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:25:45.581681 | orchestrator |
2026-03-27 00:25:45.581691 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-27 00:25:45.581700 | orchestrator | Friday 27 March 2026 00:25:09 +0000 (0:00:01.011) 0:00:05.062 **********
2026-03-27 00:25:45.581710 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:25:45.581720 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:25:45.581729 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:25:45.581739 | orchestrator |
2026-03-27 00:25:45.581750 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-27 00:25:45.581760 | orchestrator | Friday 27 March 2026 00:25:09 +0000 (0:00:00.422) 0:00:05.484 **********
2026-03-27 00:25:45.581769 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:25:45.581779 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:25:45.581789 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:25:45.581799 | orchestrator |
2026-03-27 00:25:45.581815 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-27 00:25:45.581839 | orchestrator | Friday 27 March 2026 00:25:10 +0000 (0:00:01.027) 0:00:06.512 **********
2026-03-27 00:25:45.581857 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:25:45.581867 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:25:45.581875 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:25:45.581884 | orchestrator |
2026-03-27 00:25:45.581892 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-27 00:25:45.581901 | orchestrator | Friday 27 March 2026 00:25:27 +0000 (0:00:16.806) 0:00:23.319 **********
2026-03-27 00:25:45.581910 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:25:45.581918 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:25:45.581927 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:25:45.581935 | orchestrator |
2026-03-27 00:25:45.581944 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-27 00:25:45.581987 | orchestrator | Friday 27 March 2026 00:25:27 +0000 (0:00:00.069) 0:00:23.388 **********
2026-03-27 00:25:45.581997 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:25:45.582006 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:25:45.582065 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:25:45.582075 | orchestrator |
2026-03-27 00:25:45.582084 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-27 00:25:45.582093 | orchestrator | Friday 27 March 2026 00:25:35 +0000 (0:00:08.391) 0:00:31.780 **********
2026-03-27 00:25:45.582101 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:25:45.582110 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:25:45.582119 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:25:45.582127 | orchestrator |
2026-03-27 00:25:45.582136 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-27 00:25:45.582145 | orchestrator | Friday 27 March 2026 00:25:36 +0000 (0:00:00.471) 0:00:32.252 **********
2026-03-27 00:25:45.582153 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-27 00:25:45.582163 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-27 00:25:45.582171 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-27 00:25:45.582180 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-27 00:25:45.582189 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-27 00:25:45.582198 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-27 00:25:45.582207 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-27 00:25:45.582215 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-27 00:25:45.582224 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-27 00:25:45.582233 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-27 00:25:45.582242 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-27 00:25:45.582250 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-27 00:25:45.582259 | orchestrator |
2026-03-27 00:25:45.582268 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-27 00:25:45.582276 | orchestrator | Friday 27 March 2026 00:25:40 +0000 (0:00:03.907) 0:00:36.159 **********
2026-03-27 00:25:45.582285 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:25:45.582293 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:25:45.582302 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:25:45.582311 | orchestrator |
2026-03-27 00:25:45.582319 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-27 00:25:45.582328 | orchestrator |
2026-03-27 00:25:45.582336 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-27 00:25:45.582345 | orchestrator | Friday 27 March 2026 00:25:41 +0000 (0:00:01.444) 0:00:37.603 **********
2026-03-27 00:25:45.582354 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:25:45.582369 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:25:45.582378 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:25:45.582386 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:25:45.582395 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:25:45.582403 | orchestrator | ok: [testbed-manager]
2026-03-27 00:25:45.582412 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:25:45.582421 | orchestrator |
2026-03-27 00:25:45.582429 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:25:45.582473 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:25:45.582483 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:25:45.582494 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:25:45.582502 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:25:45.582511 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:25:45.582520 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:25:45.582529 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:25:45.582537 | orchestrator |
2026-03-27 00:25:45.582546 | orchestrator |
2026-03-27 00:25:45.582555 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:25:45.582564 | orchestrator | Friday 27 March 2026 00:25:45 +0000 (0:00:03.907) 0:00:41.511 **********
2026-03-27 00:25:45.582572 | orchestrator | ===============================================================================
2026-03-27 00:25:45.582581 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.81s
2026-03-27 00:25:45.582590 | orchestrator | Install required packages (Debian) -------------------------------------- 8.39s
2026-03-27 00:25:45.582598 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.91s
2026-03-27 00:25:45.582607 | orchestrator | Copy fact files --------------------------------------------------------- 3.91s
2026-03-27 00:25:45.582615 | orchestrator | Create custom facts directory ------------------------------------------- 1.46s
2026-03-27 00:25:45.582624 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.44s
2026-03-27 00:25:45.582639 | orchestrator | Copy fact file ---------------------------------------------------------- 1.26s
2026-03-27 00:25:45.758647 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s
2026-03-27 00:25:45.758745 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.01s
2026-03-27 00:25:45.758761 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-03-27 00:25:45.758774 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.42s
2026-03-27 00:25:45.758785 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s
2026-03-27 00:25:45.758795 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-03-27 00:25:45.758806 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2026-03-27 00:25:45.758817 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-03-27 00:25:45.758828 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-03-27 00:25:45.758888 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-03-27 00:25:45.758902 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.07s
2026-03-27 00:25:45.933489 | orchestrator | + osism apply bootstrap
2026-03-27 00:25:57.271917 | orchestrator | 2026-03-27 00:25:57 | INFO  | Prepare task for execution of bootstrap.
2026-03-27 00:25:57.341083 | orchestrator | 2026-03-27 00:25:57 | INFO  | Task 3a20dec4-e325-4b9d-94d0-d743dd40cc4a (bootstrap) was prepared for execution.
2026-03-27 00:25:57.341188 | orchestrator | 2026-03-27 00:25:57 | INFO  | It takes a moment until task 3a20dec4-e325-4b9d-94d0-d743dd40cc4a (bootstrap) has been started and output is visible here.
2026-03-27 00:26:14.041348 | orchestrator |
2026-03-27 00:26:14.041457 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-27 00:26:14.041473 | orchestrator |
2026-03-27 00:26:14.041485 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-27 00:26:14.041496 | orchestrator | Friday 27 March 2026 00:26:00 +0000 (0:00:00.203) 0:00:00.203 **********
2026-03-27 00:26:14.041508 | orchestrator | ok: [testbed-manager]
2026-03-27 00:26:14.041519 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:26:14.041529 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:26:14.041538 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:26:14.041548 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:26:14.041557 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:26:14.041566 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:26:14.041575 | orchestrator |
2026-03-27 00:26:14.041585 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-27 00:26:14.041594 | orchestrator |
2026-03-27 00:26:14.041604 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-27 00:26:14.041613 | orchestrator | Friday 27 March 2026 00:26:00 +0000 (0:00:00.308) 0:00:00.512 **********
2026-03-27 00:26:14.041623 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:26:14.041633 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:26:14.041642 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:26:14.041652 | orchestrator | ok: [testbed-manager]
2026-03-27 00:26:14.041661 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:26:14.041670 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:26:14.041680 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:26:14.041689 | orchestrator |
2026-03-27 00:26:14.041699 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-27 00:26:14.041708 | orchestrator |
2026-03-27 00:26:14.041718 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-27 00:26:14.041727 | orchestrator | Friday 27 March 2026 00:26:06 +0000 (0:00:05.621) 0:00:06.133 **********
2026-03-27 00:26:14.041738 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-27 00:26:14.041748 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-27 00:26:14.041757 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-27 00:26:14.041766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-27 00:26:14.041776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-27 00:26:14.041786 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-27 00:26:14.041795 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-27 00:26:14.041805 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-27 00:26:14.041814 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-27 00:26:14.041823 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-27 00:26:14.041833 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-27 00:26:14.041842 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-27 00:26:14.041852 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-27 00:26:14.041861 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-27 00:26:14.041872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-27 00:26:14.041890 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-27 00:26:14.041939 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-27 00:26:14.041987 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-27 00:26:14.042001 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:26:14.042012 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-27 00:26:14.042073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-27 00:26:14.042084 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-27 00:26:14.042095 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-27 00:26:14.042105 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-27 00:26:14.042114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-27 00:26:14.042124 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-27 00:26:14.042133 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:26:14.042143 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-27 00:26:14.042152 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-27 00:26:14.042162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-27 00:26:14.042171 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-27 00:26:14.042180 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-27 00:26:14.042190 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-27 00:26:14.042199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-27 00:26:14.042208 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-27 00:26:14.042218 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-27 00:26:14.042227 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-27 00:26:14.042237 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-27 00:26:14.042247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-27 00:26:14.042256 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-27 00:26:14.042266 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-27 00:26:14.042275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-27 00:26:14.042285 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-27 00:26:14.042295 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-27 00:26:14.042304 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-27 00:26:14.042314 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:26:14.042341 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:26:14.042352 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-27 00:26:14.042361 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-27 00:26:14.042371 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-27 00:26:14.042380 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:26:14.042390 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-27 00:26:14.042399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-27 00:26:14.042408 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:26:14.042418 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-27 00:26:14.042427 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:26:14.042436 | orchestrator |
2026-03-27 00:26:14.042446 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-27 00:26:14.042455 | orchestrator |
2026-03-27 00:26:14.042465 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-27 00:26:14.042474 | orchestrator | Friday 27 March 2026 00:26:07 +0000 (0:00:00.435) 0:00:06.569 **********
2026-03-27 00:26:14.042484 | orchestrator | ok: [testbed-manager]
2026-03-27 00:26:14.042493 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:26:14.042511 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:26:14.042521 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:26:14.042530 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:26:14.042540 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:26:14.042549 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:26:14.042559 | orchestrator |
2026-03-27 00:26:14.042568 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-27 00:26:14.042578 | orchestrator | Friday 27 March 2026 00:26:08 +0000 (0:00:01.392) 0:00:07.962 **********
2026-03-27 00:26:14.042588 | orchestrator | ok: [testbed-manager]
2026-03-27 00:26:14.042597 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:26:14.042606 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:26:14.042616 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:26:14.042625 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:26:14.042634 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:26:14.042644 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:26:14.042653 | orchestrator |
2026-03-27 00:26:14.042663 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-27 00:26:14.042672 | orchestrator | Friday 27 March 2026 00:26:09 +0000 (0:00:01.177) 0:00:09.140 **********
2026-03-27 00:26:14.042682 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:26:14.042694 | orchestrator | 2026-03-27 00:26:14.042704 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-27 00:26:14.042714 | orchestrator | Friday 27 March 2026 00:26:09 +0000 (0:00:00.297) 0:00:09.438 ********** 2026-03-27 00:26:14.042723 | orchestrator | changed: [testbed-manager] 2026-03-27 00:26:14.042732 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:26:14.042742 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:26:14.042751 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:26:14.042761 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:26:14.042770 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:26:14.042779 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:26:14.042788 | orchestrator | 2026-03-27 00:26:14.042798 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-27 00:26:14.042807 | orchestrator | Friday 27 March 2026 00:26:11 +0000 (0:00:01.531) 0:00:10.969 ********** 2026-03-27 00:26:14.042817 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:26:14.042828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:26:14.042840 | orchestrator | 2026-03-27 00:26:14.042849 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-27 00:26:14.042859 | orchestrator | Friday 27 March 2026 00:26:11 +0000 (0:00:00.297) 0:00:11.266 ********** 2026-03-27 00:26:14.042868 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:26:14.042886 | 
orchestrator | changed: [testbed-node-0] 2026-03-27 00:26:14.042904 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:26:14.042922 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:26:14.042941 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:26:14.042980 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:26:14.042990 | orchestrator | 2026-03-27 00:26:14.043000 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-27 00:26:14.043010 | orchestrator | Friday 27 March 2026 00:26:12 +0000 (0:00:01.060) 0:00:12.327 ********** 2026-03-27 00:26:14.043019 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:26:14.043028 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:26:14.043054 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:26:14.043064 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:26:14.043073 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:26:14.043082 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:26:14.043099 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:26:14.043108 | orchestrator | 2026-03-27 00:26:14.043118 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-27 00:26:14.043132 | orchestrator | Friday 27 March 2026 00:26:13 +0000 (0:00:00.608) 0:00:12.935 ********** 2026-03-27 00:26:14.043142 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:26:14.043151 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:26:14.043161 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:26:14.043170 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:26:14.043179 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:26:14.043188 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:26:14.043198 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:14.043207 | orchestrator | 2026-03-27 00:26:14.043217 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-27 00:26:14.043227 | orchestrator | Friday 27 March 2026 00:26:13 +0000 (0:00:00.503) 0:00:13.438 ********** 2026-03-27 00:26:14.043237 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:26:14.043246 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:26:14.043262 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:26:26.735242 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:26:26.735365 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:26:26.735381 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:26:26.735393 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:26:26.735404 | orchestrator | 2026-03-27 00:26:26.735417 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-27 00:26:26.735429 | orchestrator | Friday 27 March 2026 00:26:14 +0000 (0:00:00.220) 0:00:13.659 ********** 2026-03-27 00:26:26.735442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:26:26.735471 | orchestrator | 2026-03-27 00:26:26.735483 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-27 00:26:26.735494 | orchestrator | Friday 27 March 2026 00:26:14 +0000 (0:00:00.326) 0:00:13.986 ********** 2026-03-27 00:26:26.735505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:26:26.735517 | orchestrator | 2026-03-27 00:26:26.735528 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-27 
00:26:26.735539 | orchestrator | Friday 27 March 2026 00:26:14 +0000 (0:00:00.360) 0:00:14.346 ********** 2026-03-27 00:26:26.735550 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:26:26.735561 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:26:26.735572 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:26.735583 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:26:26.735594 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:26:26.735605 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:26:26.735615 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:26:26.735626 | orchestrator | 2026-03-27 00:26:26.735637 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-27 00:26:26.735648 | orchestrator | Friday 27 March 2026 00:26:16 +0000 (0:00:01.370) 0:00:15.717 ********** 2026-03-27 00:26:26.735660 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:26:26.735671 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:26:26.735682 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:26:26.735692 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:26:26.735703 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:26:26.735716 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:26:26.735728 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:26:26.735746 | orchestrator | 2026-03-27 00:26:26.735768 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-27 00:26:26.735822 | orchestrator | Friday 27 March 2026 00:26:16 +0000 (0:00:00.254) 0:00:15.971 ********** 2026-03-27 00:26:26.735840 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:26.735854 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:26:26.735866 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:26:26.735878 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:26:26.735893 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:26:26.735912 | orchestrator 
| ok: [testbed-node-4] 2026-03-27 00:26:26.735931 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:26:26.735998 | orchestrator | 2026-03-27 00:26:26.736019 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-27 00:26:26.736032 | orchestrator | Friday 27 March 2026 00:26:17 +0000 (0:00:00.638) 0:00:16.609 ********** 2026-03-27 00:26:26.736044 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:26:26.736057 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:26:26.736069 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:26:26.736080 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:26:26.736091 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:26:26.736101 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:26:26.736112 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:26:26.736122 | orchestrator | 2026-03-27 00:26:26.736133 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-27 00:26:26.736145 | orchestrator | Friday 27 March 2026 00:26:17 +0000 (0:00:00.292) 0:00:16.902 ********** 2026-03-27 00:26:26.736155 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:26:26.736166 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:26.736177 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:26:26.736187 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:26:26.736198 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:26:26.736208 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:26:26.736219 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:26:26.736229 | orchestrator | 2026-03-27 00:26:26.736240 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-27 00:26:26.736251 | orchestrator | Friday 27 March 2026 00:26:18 +0000 (0:00:00.656) 0:00:17.559 ********** 2026-03-27 00:26:26.736261 | orchestrator | ok: 
[testbed-manager] 2026-03-27 00:26:26.736272 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:26:26.736282 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:26:26.736293 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:26:26.736303 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:26:26.736314 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:26:26.736324 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:26:26.736335 | orchestrator | 2026-03-27 00:26:26.736356 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-27 00:26:26.736367 | orchestrator | Friday 27 March 2026 00:26:19 +0000 (0:00:01.188) 0:00:18.748 ********** 2026-03-27 00:26:26.736378 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:26:26.736388 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:26.736399 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:26:26.736410 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:26:26.736421 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:26:26.736431 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:26:26.736442 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:26:26.736452 | orchestrator | 2026-03-27 00:26:26.736463 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-27 00:26:26.736474 | orchestrator | Friday 27 March 2026 00:26:20 +0000 (0:00:01.158) 0:00:19.906 ********** 2026-03-27 00:26:26.736504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:26:26.736515 | orchestrator | 2026-03-27 00:26:26.736526 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-27 00:26:26.736537 | orchestrator | Friday 27 March 2026 
00:26:20 +0000 (0:00:00.325) 0:00:20.232 ********** 2026-03-27 00:26:26.736557 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:26:26.736568 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:26:26.736579 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:26:26.736589 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:26:26.736600 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:26:26.736611 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:26:26.736621 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:26:26.736632 | orchestrator | 2026-03-27 00:26:26.736642 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-27 00:26:26.736653 | orchestrator | Friday 27 March 2026 00:26:22 +0000 (0:00:01.309) 0:00:21.542 ********** 2026-03-27 00:26:26.736664 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:26.736674 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:26:26.736685 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:26:26.736695 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:26:26.736706 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:26:26.736717 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:26:26.736727 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:26:26.736737 | orchestrator | 2026-03-27 00:26:26.736748 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-27 00:26:26.736759 | orchestrator | Friday 27 March 2026 00:26:22 +0000 (0:00:00.255) 0:00:21.797 ********** 2026-03-27 00:26:26.736770 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:26.736781 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:26:26.736791 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:26:26.736802 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:26:26.736812 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:26:26.736823 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:26:26.736833 | 
orchestrator | ok: [testbed-node-5] 2026-03-27 00:26:26.736844 | orchestrator | 2026-03-27 00:26:26.736854 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-27 00:26:26.736865 | orchestrator | Friday 27 March 2026 00:26:22 +0000 (0:00:00.236) 0:00:22.034 ********** 2026-03-27 00:26:26.736876 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:26.736886 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:26:26.736897 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:26:26.736907 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:26:26.736918 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:26:26.736928 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:26:26.736939 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:26:26.736985 | orchestrator | 2026-03-27 00:26:26.736999 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-27 00:26:26.737010 | orchestrator | Friday 27 March 2026 00:26:22 +0000 (0:00:00.232) 0:00:22.266 ********** 2026-03-27 00:26:26.737022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:26:26.737035 | orchestrator | 2026-03-27 00:26:26.737045 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-27 00:26:26.737056 | orchestrator | Friday 27 March 2026 00:26:23 +0000 (0:00:00.281) 0:00:22.548 ********** 2026-03-27 00:26:26.737066 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:26.737077 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:26:26.737088 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:26:26.737098 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:26:26.737109 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:26:26.737119 | orchestrator | ok: 
[testbed-node-2] 2026-03-27 00:26:26.737135 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:26:26.737154 | orchestrator | 2026-03-27 00:26:26.737175 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-27 00:26:26.737195 | orchestrator | Friday 27 March 2026 00:26:23 +0000 (0:00:00.649) 0:00:23.197 ********** 2026-03-27 00:26:26.737214 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:26:26.737233 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:26:26.737244 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:26:26.737255 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:26:26.737265 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:26:26.737276 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:26:26.737286 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:26:26.737297 | orchestrator | 2026-03-27 00:26:26.737323 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-27 00:26:26.737334 | orchestrator | Friday 27 March 2026 00:26:23 +0000 (0:00:00.236) 0:00:23.434 ********** 2026-03-27 00:26:26.737345 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:26.737355 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:26:26.737366 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:26:26.737376 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:26:26.737387 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:26:26.737397 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:26:26.737408 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:26:26.737430 | orchestrator | 2026-03-27 00:26:26.737442 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-27 00:26:26.737453 | orchestrator | Friday 27 March 2026 00:26:25 +0000 (0:00:01.148) 0:00:24.583 ********** 2026-03-27 00:26:26.737464 | orchestrator | ok: [testbed-manager] 2026-03-27 
00:26:26.737475 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:26:26.737486 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:26:26.737496 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:26:26.737507 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:26:26.737517 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:26:26.737528 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:26:26.737538 | orchestrator | 2026-03-27 00:26:26.737549 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-27 00:26:26.737560 | orchestrator | Friday 27 March 2026 00:26:25 +0000 (0:00:00.585) 0:00:25.168 ********** 2026-03-27 00:26:26.737571 | orchestrator | ok: [testbed-manager] 2026-03-27 00:26:26.737581 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:26:26.737592 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:26:26.737602 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:26:26.737621 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:27:09.369939 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:27:09.370218 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:27:09.370249 | orchestrator | 2026-03-27 00:27:09.370262 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-27 00:27:09.370274 | orchestrator | Friday 27 March 2026 00:26:26 +0000 (0:00:01.147) 0:00:26.315 ********** 2026-03-27 00:27:09.370284 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:27:09.370294 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:27:09.370304 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:27:09.370313 | orchestrator | changed: [testbed-manager] 2026-03-27 00:27:09.370323 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:27:09.370333 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:27:09.370342 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:27:09.370352 | orchestrator | 2026-03-27 00:27:09.370362 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-27 00:27:09.370372 | orchestrator | Friday 27 March 2026 00:26:44 +0000 (0:00:17.620) 0:00:43.936 ********** 2026-03-27 00:27:09.370382 | orchestrator | ok: [testbed-manager] 2026-03-27 00:27:09.370392 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:27:09.370402 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:27:09.370412 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:27:09.370422 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:27:09.370438 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:27:09.370456 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:27:09.370472 | orchestrator | 2026-03-27 00:27:09.370488 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-27 00:27:09.370505 | orchestrator | Friday 27 March 2026 00:26:44 +0000 (0:00:00.246) 0:00:44.183 ********** 2026-03-27 00:27:09.370522 | orchestrator | ok: [testbed-manager] 2026-03-27 00:27:09.370569 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:27:09.370586 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:27:09.370603 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:27:09.370619 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:27:09.370637 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:27:09.370655 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:27:09.370674 | orchestrator | 2026-03-27 00:27:09.370692 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-27 00:27:09.370706 | orchestrator | Friday 27 March 2026 00:26:44 +0000 (0:00:00.255) 0:00:44.439 ********** 2026-03-27 00:27:09.370718 | orchestrator | ok: [testbed-manager] 2026-03-27 00:27:09.370729 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:27:09.370740 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:27:09.370751 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:27:09.370762 | orchestrator | ok: 
[testbed-node-3] 2026-03-27 00:27:09.370773 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:27:09.370785 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:27:09.370796 | orchestrator | 2026-03-27 00:27:09.370807 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-27 00:27:09.370818 | orchestrator | Friday 27 March 2026 00:26:45 +0000 (0:00:00.203) 0:00:44.643 ********** 2026-03-27 00:27:09.370833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:27:09.370846 | orchestrator | 2026-03-27 00:27:09.370856 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-27 00:27:09.370866 | orchestrator | Friday 27 March 2026 00:26:45 +0000 (0:00:00.310) 0:00:44.954 ********** 2026-03-27 00:27:09.370875 | orchestrator | ok: [testbed-manager] 2026-03-27 00:27:09.370884 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:27:09.370894 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:27:09.370903 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:27:09.370912 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:27:09.370922 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:27:09.370931 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:27:09.370941 | orchestrator | 2026-03-27 00:27:09.370986 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-27 00:27:09.370999 | orchestrator | Friday 27 March 2026 00:26:47 +0000 (0:00:01.980) 0:00:46.935 ********** 2026-03-27 00:27:09.371009 | orchestrator | changed: [testbed-manager] 2026-03-27 00:27:09.371035 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:27:09.371045 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:27:09.371054 | orchestrator | 
changed: [testbed-node-1] 2026-03-27 00:27:09.371064 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:27:09.371073 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:27:09.371082 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:27:09.371092 | orchestrator | 2026-03-27 00:27:09.371102 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-27 00:27:09.371111 | orchestrator | Friday 27 March 2026 00:26:48 +0000 (0:00:01.256) 0:00:48.191 ********** 2026-03-27 00:27:09.371121 | orchestrator | ok: [testbed-manager] 2026-03-27 00:27:09.371130 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:27:09.371140 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:27:09.371149 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:27:09.371158 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:27:09.371168 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:27:09.371177 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:27:09.371186 | orchestrator | 2026-03-27 00:27:09.371196 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-27 00:27:09.371206 | orchestrator | Friday 27 March 2026 00:26:49 +0000 (0:00:00.898) 0:00:49.090 ********** 2026-03-27 00:27:09.371220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:27:09.371242 | orchestrator | 2026-03-27 00:27:09.371251 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-27 00:27:09.371262 | orchestrator | Friday 27 March 2026 00:26:49 +0000 (0:00:00.305) 0:00:49.395 ********** 2026-03-27 00:27:09.371271 | orchestrator | changed: [testbed-manager] 2026-03-27 00:27:09.371281 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:27:09.371290 | 
orchestrator | changed: [testbed-node-1]
2026-03-27 00:27:09.371299 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:27:09.371309 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:27:09.371318 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:27:09.371327 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:27:09.371337 | orchestrator |
2026-03-27 00:27:09.371367 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-27 00:27:09.371377 | orchestrator | Friday 27 March 2026 00:26:51 +0000 (0:00:01.177) 0:00:50.573 **********
2026-03-27 00:27:09.371386 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:27:09.371396 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:27:09.371405 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:27:09.371414 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:27:09.371424 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:27:09.371433 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:27:09.371442 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:27:09.371452 | orchestrator |
2026-03-27 00:27:09.371461 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-27 00:27:09.371471 | orchestrator | Friday 27 March 2026 00:26:51 +0000 (0:00:00.232) 0:00:50.806 **********
2026-03-27 00:27:09.371481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:27:09.371490 | orchestrator |
2026-03-27 00:27:09.371500 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-27 00:27:09.371509 | orchestrator | Friday 27 March 2026 00:26:51 +0000 (0:00:00.279) 0:00:51.085 **********
2026-03-27 00:27:09.371518 | orchestrator | ok: [testbed-manager]
2026-03-27 00:27:09.371528 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:27:09.371537 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:27:09.371547 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:27:09.371561 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:27:09.371578 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:27:09.371594 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:27:09.371610 | orchestrator |
2026-03-27 00:27:09.371625 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-27 00:27:09.371642 | orchestrator | Friday 27 March 2026 00:26:53 +0000 (0:00:01.860) 0:00:52.946 **********
2026-03-27 00:27:09.371658 | orchestrator | changed: [testbed-manager]
2026-03-27 00:27:09.371675 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:27:09.371691 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:27:09.371708 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:27:09.371718 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:27:09.371728 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:27:09.371737 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:27:09.371747 | orchestrator |
2026-03-27 00:27:09.371756 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-27 00:27:09.371766 | orchestrator | Friday 27 March 2026 00:26:54 +0000 (0:00:01.195) 0:00:54.141 **********
2026-03-27 00:27:09.371775 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:27:09.371785 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:27:09.371794 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:27:09.371804 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:27:09.371819 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:27:09.371835 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:27:09.371866 | orchestrator | changed: [testbed-manager]
2026-03-27 00:27:09.371877 | orchestrator |
2026-03-27 00:27:09.371886 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-27 00:27:09.371896 | orchestrator | Friday 27 March 2026 00:27:06 +0000 (0:00:11.523) 0:01:05.665 **********
2026-03-27 00:27:09.371906 | orchestrator | ok: [testbed-manager]
2026-03-27 00:27:09.371915 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:27:09.371925 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:27:09.371934 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:27:09.371944 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:27:09.372019 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:27:09.372030 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:27:09.372040 | orchestrator |
2026-03-27 00:27:09.372049 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-27 00:27:09.372059 | orchestrator | Friday 27 March 2026 00:27:07 +0000 (0:00:01.530) 0:01:07.195 **********
2026-03-27 00:27:09.372069 | orchestrator | ok: [testbed-manager]
2026-03-27 00:27:09.372078 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:27:09.372087 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:27:09.372097 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:27:09.372106 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:27:09.372116 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:27:09.372125 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:27:09.372135 | orchestrator |
2026-03-27 00:27:09.372145 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-27 00:27:09.372154 | orchestrator | Friday 27 March 2026 00:27:08 +0000 (0:00:01.034) 0:01:08.230 **********
2026-03-27 00:27:09.372164 | orchestrator | ok: [testbed-manager]
2026-03-27 00:27:09.372173 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:27:09.372183 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:27:09.372192 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:27:09.372202 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:27:09.372211 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:27:09.372228 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:27:09.372245 | orchestrator |
2026-03-27 00:27:09.372262 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-27 00:27:09.372279 | orchestrator | Friday 27 March 2026 00:27:08 +0000 (0:00:00.215) 0:01:08.445 **********
2026-03-27 00:27:09.372293 | orchestrator | ok: [testbed-manager]
2026-03-27 00:27:09.372309 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:27:09.372326 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:27:09.372351 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:27:09.372369 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:27:09.372388 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:27:09.372406 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:27:09.372423 | orchestrator |
2026-03-27 00:27:09.372437 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-27 00:27:09.372454 | orchestrator | Friday 27 March 2026 00:27:09 +0000 (0:00:00.190) 0:01:08.636 **********
2026-03-27 00:27:09.372469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:27:09.372483 | orchestrator |
2026-03-27 00:27:09.372512 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-27 00:29:28.555085 | orchestrator | Friday 27 March 2026 00:27:09 +0000 (0:00:00.246) 0:01:08.882 **********
2026-03-27 00:29:28.555183 | orchestrator | ok: [testbed-manager]
2026-03-27 00:29:28.555191 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:28.555196 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:28.555200 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:28.555204 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:28.555208 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:28.555212 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:28.555216 | orchestrator |
2026-03-27 00:29:28.555221 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-27 00:29:28.555244 | orchestrator | Friday 27 March 2026 00:27:11 +0000 (0:00:02.373) 0:01:11.256 **********
2026-03-27 00:29:28.555248 | orchestrator | changed: [testbed-manager]
2026-03-27 00:29:28.555253 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:29:28.555267 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:29:28.555271 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:29:28.555275 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:29:28.555279 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:29:28.555292 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:29:28.555298 | orchestrator |
2026-03-27 00:29:28.555304 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-27 00:29:28.555312 | orchestrator | Friday 27 March 2026 00:27:12 +0000 (0:00:00.631) 0:01:11.887 **********
2026-03-27 00:29:28.555319 | orchestrator | ok: [testbed-manager]
2026-03-27 00:29:28.555324 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:28.555331 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:28.555336 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:28.555340 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:28.555344 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:28.555348 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:28.555351 | orchestrator |
2026-03-27 00:29:28.555355 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-27 00:29:28.555359 | orchestrator | Friday 27 March 2026 00:27:12 +0000 (0:00:00.238) 0:01:12.125 **********
2026-03-27 00:29:28.555363 | orchestrator | ok: [testbed-manager]
2026-03-27 00:29:28.555367 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:28.555370 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:28.555374 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:28.555378 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:28.555381 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:28.555385 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:28.555389 | orchestrator |
2026-03-27 00:29:28.555393 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-27 00:29:28.555397 | orchestrator | Friday 27 March 2026 00:27:14 +0000 (0:00:01.443) 0:01:13.569 **********
2026-03-27 00:29:28.555400 | orchestrator | changed: [testbed-manager]
2026-03-27 00:29:28.555404 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:29:28.555407 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:29:28.555411 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:29:28.555415 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:29:28.555418 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:29:28.555422 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:29:28.555426 | orchestrator |
2026-03-27 00:29:28.555429 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-27 00:29:28.555433 | orchestrator | Friday 27 March 2026 00:27:16 +0000 (0:00:02.050) 0:01:15.620 **********
2026-03-27 00:29:28.555437 | orchestrator | ok: [testbed-manager]
2026-03-27 00:29:28.555440 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:28.555444 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:28.555448 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:28.555452 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:28.555456 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:28.555462 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:28.555468 | orchestrator |
2026-03-27 00:29:28.555474 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-27 00:29:28.555480 | orchestrator | Friday 27 March 2026 00:27:19 +0000 (0:00:03.605) 0:01:19.225 **********
2026-03-27 00:29:28.555486 | orchestrator | ok: [testbed-manager]
2026-03-27 00:29:28.555492 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:28.555498 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:28.555504 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:28.555510 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:28.555518 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:28.555521 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:28.555525 | orchestrator |
2026-03-27 00:29:28.555529 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-27 00:29:28.555538 | orchestrator | Friday 27 March 2026 00:27:57 +0000 (0:00:37.791) 0:01:57.016 **********
2026-03-27 00:29:28.555542 | orchestrator | changed: [testbed-manager]
2026-03-27 00:29:28.555545 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:29:28.555549 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:29:28.555553 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:29:28.555556 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:29:28.555560 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:29:28.555563 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:29:28.555567 | orchestrator |
2026-03-27 00:29:28.555571 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-27 00:29:28.555574 | orchestrator | Friday 27 March 2026 00:29:14 +0000 (0:01:17.050) 0:03:14.067 **********
2026-03-27 00:29:28.555578 | orchestrator | ok: [testbed-manager]
2026-03-27 00:29:28.555582 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:28.555585 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:28.555589 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:28.555592 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:28.555607 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:28.555613 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:28.555619 | orchestrator |
2026-03-27 00:29:28.555626 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-27 00:29:28.555632 | orchestrator | Friday 27 March 2026 00:29:16 +0000 (0:00:02.133) 0:03:16.201 **********
2026-03-27 00:29:28.555638 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:28.555644 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:28.555649 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:28.555655 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:28.555662 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:28.555667 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:28.555674 | orchestrator | changed: [testbed-manager]
2026-03-27 00:29:28.555680 | orchestrator |
2026-03-27 00:29:28.555687 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-27 00:29:28.555694 | orchestrator | Friday 27 March 2026 00:29:27 +0000 (0:00:10.921) 0:03:27.123 **********
2026-03-27 00:29:28.555724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-27 00:29:28.555736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-27 00:29:28.555746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-27 00:29:28.555755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-27 00:29:28.555769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-27 00:29:28.555776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-27 00:29:28.555786 | orchestrator |
2026-03-27 00:29:28.555793 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-27 00:29:28.555800 | orchestrator | Friday 27 March 2026 00:29:27 +0000 (0:00:00.307) 0:03:27.430 **********
2026-03-27 00:29:28.555807 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-27 00:29:28.555813 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:29:28.555819 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-27 00:29:28.555826 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:29:28.555832 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-27 00:29:28.555838 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:29:28.555845 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-27 00:29:28.555851 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:29:28.555857 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-27 00:29:28.555864 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-27 00:29:28.555870 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-27 00:29:28.555876 | orchestrator |
2026-03-27 00:29:28.555882 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-27 00:29:28.555889 | orchestrator | Friday 27 March 2026 00:29:28 +0000 (0:00:00.585) 0:03:28.015 **********
2026-03-27 00:29:28.555902 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-27 00:29:28.555909 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-27 00:29:28.555916 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-27 00:29:28.555921 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-27 00:29:28.555927 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-27 00:29:28.556005 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-27 00:29:37.507854 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-27 00:29:37.508105 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-27 00:29:37.508132 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-27 00:29:37.508150 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-27 00:29:37.508171 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:29:37.508190 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-27 00:29:37.508208 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-27 00:29:37.508226 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-27 00:29:37.508286 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-27 00:29:37.508307 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-27 00:29:37.508327 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-27 00:29:37.508345 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-27 00:29:37.508365 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-27 00:29:37.508385 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-27 00:29:37.508406 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-27 00:29:37.508426 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-27 00:29:37.508447 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-27 00:29:37.508467 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-27 00:29:37.508488 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:29:37.508508 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-27 00:29:37.508529 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-27 00:29:37.508548 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-27 00:29:37.508567 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-27 00:29:37.508587 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-27 00:29:37.508605 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-27 00:29:37.508624 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-27 00:29:37.508642 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-27 00:29:37.508662 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-27 00:29:37.508681 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:29:37.508699 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-27 00:29:37.508717 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-27 00:29:37.508734 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-27 00:29:37.508752 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-27 00:29:37.508769 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-27 00:29:37.508786 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-27 00:29:37.508805 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-27 00:29:37.508847 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-27 00:29:37.508866 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:29:37.508885 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-27 00:29:37.508903 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-27 00:29:37.508922 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-27 00:29:37.508985 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-27 00:29:37.509006 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-27 00:29:37.509054 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-27 00:29:37.509075 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-27 00:29:37.509094 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-27 00:29:37.509113 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-27 00:29:37.509131 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-27 00:29:37.509149 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-27 00:29:37.509168 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-27 00:29:37.509186 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-27 00:29:37.509205 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-27 00:29:37.509224 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-27 00:29:37.509243 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-27 00:29:37.509263 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-27 00:29:37.509281 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-27 00:29:37.509296 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-27 00:29:37.509306 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-27 00:29:37.509317 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-27 00:29:37.509328 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-27 00:29:37.509338 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-27 00:29:37.509349 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-27 00:29:37.509359 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-27 00:29:37.509370 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-27 00:29:37.509380 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-27 00:29:37.509391 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-27 00:29:37.509401 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-27 00:29:37.509412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-27 00:29:37.509423 | orchestrator |
2026-03-27 00:29:37.509436 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-27 00:29:37.509446 | orchestrator | Friday 27 March 2026 00:29:35 +0000 (0:00:06.775) 0:03:34.790 **********
2026-03-27 00:29:37.509457 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-27 00:29:37.509468 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-27 00:29:37.509478 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-27 00:29:37.509489 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-27 00:29:37.509511 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-27 00:29:37.509521 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-27 00:29:37.509532 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-27 00:29:37.509542 | orchestrator |
2026-03-27 00:29:37.509553 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-27 00:29:37.509563 | orchestrator | Friday 27 March 2026 00:29:36 +0000 (0:00:01.561) 0:03:36.352 **********
2026-03-27 00:29:37.509574 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:37.509584 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:29:37.509603 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:37.509614 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:29:37.509624 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:37.509635 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:37.509645 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:29:37.509656 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:29:37.509667 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:37.509677 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:37.509697 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:51.715620 | orchestrator |
2026-03-27 00:29:51.715715 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-27 00:29:51.715726 | orchestrator | Friday 27 March 2026 00:29:37 +0000 (0:00:00.706) 0:03:37.058 **********
2026-03-27 00:29:51.715735 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:51.715743 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:29:51.715751 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:51.715759 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:51.715766 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:29:51.715773 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:29:51.715781 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:51.715788 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:29:51.715795 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:51.715802 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:51.715809 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-27 00:29:51.715816 | orchestrator |
2026-03-27 00:29:51.715823 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-27 00:29:51.715831 | orchestrator | Friday 27 March 2026 00:29:39 +0000 (0:00:01.487) 0:03:38.546 **********
2026-03-27 00:29:51.715838 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-27 00:29:51.715845 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:29:51.715852 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-27 00:29:51.715861 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-27 00:29:51.715873 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:29:51.715911 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:29:51.715926 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-27 00:29:51.715960 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:29:51.715973 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-27 00:29:51.715984 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-27 00:29:51.715996 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-27 00:29:51.716007 | orchestrator |
2026-03-27 00:29:51.716018 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-27 00:29:51.716030 | orchestrator | Friday 27 March 2026 00:29:40 +0000 (0:00:01.595) 0:03:40.141 **********
2026-03-27 00:29:51.716042 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:29:51.716053 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:29:51.716064 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:29:51.716076 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:29:51.716086 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:29:51.716098 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:29:51.716110 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:29:51.716121 | orchestrator |
2026-03-27 00:29:51.716134 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-27 00:29:51.716151 | orchestrator | Friday 27 March 2026 00:29:40 +0000 (0:00:00.229) 0:03:40.371 **********
2026-03-27 00:29:51.716168 | orchestrator | ok: [testbed-manager]
2026-03-27 00:29:51.716181 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:51.716203 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:51.716219 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:51.716233 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:51.716247 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:51.716262 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:51.716277 | orchestrator |
2026-03-27 00:29:51.716292 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-27 00:29:51.716307 | orchestrator | Friday 27 March 2026 00:29:46 +0000 (0:00:05.459) 0:03:45.831 **********
2026-03-27 00:29:51.716322 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-27 00:29:51.716337 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-27 00:29:51.716352 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:29:51.716367 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:29:51.716383 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-27 00:29:51.716398 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:29:51.716412 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-27 00:29:51.716426 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-27 00:29:51.716442 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:29:51.716456 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-27 00:29:51.716471 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:29:51.716484 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:29:51.716499 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-27 00:29:51.716511 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:29:51.716522 | orchestrator |
2026-03-27 00:29:51.716533 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-27 00:29:51.716545 | orchestrator | Friday 27 March 2026 00:29:46 +0000 (0:00:00.285) 0:03:46.117 **********
2026-03-27 00:29:51.716556 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-27 00:29:51.716568 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-27 00:29:51.716578 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-27 00:29:51.716611 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-27 00:29:51.716623 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-27 00:29:51.716635 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-27 00:29:51.716659 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-27 00:29:51.716671 | orchestrator |
2026-03-27 00:29:51.716683 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-27 00:29:51.716695 | orchestrator | Friday 27 March 2026 00:29:47 +0000 (0:00:01.047) 0:03:47.164 **********
2026-03-27 00:29:51.716708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:29:51.716722 | orchestrator |
2026-03-27 00:29:51.716733 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-27 00:29:51.716745 | orchestrator | Friday 27 March 2026 00:29:48 +0000 (0:00:00.376) 0:03:47.540 **********
2026-03-27 00:29:51.716757 | orchestrator | ok: [testbed-manager]
2026-03-27 00:29:51.716768 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:51.716779 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:51.716790 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:51.716801 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:51.716811 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:51.716822 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:51.716833 | orchestrator |
2026-03-27 00:29:51.716844 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-27 00:29:51.716856 | orchestrator | Friday 27 March 2026 00:29:49 +0000 (0:00:01.282) 0:03:48.823 **********
2026-03-27 00:29:51.716867 | orchestrator | ok: [testbed-manager]
2026-03-27 00:29:51.716877 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:51.716888 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:51.716898 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:51.716909 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:51.716921 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:51.716956 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:51.716967 | orchestrator |
2026-03-27 00:29:51.716978 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-27 00:29:51.716989 | orchestrator | Friday 27 March 2026 00:29:49 +0000 (0:00:00.583) 0:03:49.406 **********
2026-03-27 00:29:51.717000 | orchestrator | changed: [testbed-manager]
2026-03-27 00:29:51.717011 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:29:51.717021 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:29:51.717033 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:29:51.717044 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:29:51.717055 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:29:51.717066 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:29:51.717078 | orchestrator |
2026-03-27 00:29:51.717091 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-27 00:29:51.717102 | orchestrator | Friday 27 March 2026 00:29:50 +0000 (0:00:00.622)
0:03:50.028 ********** 2026-03-27 00:29:51.717113 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:29:51.717139 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:29:51.717153 | orchestrator | ok: [testbed-manager] 2026-03-27 00:29:51.717164 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:29:51.717175 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:29:51.717188 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:29:51.717200 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:29:51.717211 | orchestrator | 2026-03-27 00:29:51.717223 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-27 00:29:51.717235 | orchestrator | Friday 27 March 2026 00:29:51 +0000 (0:00:00.679) 0:03:50.708 ********** 2026-03-27 00:29:51.717252 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774569924.2587967, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:51.717286 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774569952.8236816, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:51.717300 | orchestrator | 
changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774569930.5572433, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:51.717342 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774569939.9535809, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116761 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774569926.7616012, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116843 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774569957.7096572, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116850 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774569953.042601, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116855 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116877 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116893 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116898 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116913 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116918 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116923 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 00:29:57.116954 | orchestrator | 2026-03-27 00:29:57.116960 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-27 00:29:57.116966 | orchestrator | Friday 27 March 2026 00:29:52 +0000 (0:00:00.933) 0:03:51.641 ********** 2026-03-27 00:29:57.116971 | orchestrator | changed: [testbed-manager] 2026-03-27 00:29:57.116976 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:29:57.116981 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:29:57.116990 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:29:57.116994 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:29:57.116999 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:29:57.117003 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:29:57.117007 | orchestrator | 2026-03-27 00:29:57.117012 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-27 00:29:57.117016 | orchestrator | Friday 27 March 2026 00:29:53 +0000 (0:00:01.116) 0:03:52.758 ********** 2026-03-27 00:29:57.117020 | orchestrator | changed: [testbed-manager] 2026-03-27 00:29:57.117024 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:29:57.117029 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:29:57.117033 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:29:57.117037 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:29:57.117041 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:29:57.117046 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:29:57.117050 | orchestrator | 2026-03-27 00:29:57.117054 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-27 00:29:57.117058 | orchestrator | Friday 27 March 2026 00:29:54 +0000 (0:00:01.188) 0:03:53.946 ********** 2026-03-27 00:29:57.117063 | orchestrator | changed: [testbed-manager] 2026-03-27 00:29:57.117067 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:29:57.117071 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:29:57.117075 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:29:57.117080 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:29:57.117084 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:29:57.117088 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:29:57.117092 | orchestrator | 2026-03-27 00:29:57.117097 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-27 00:29:57.117104 | orchestrator | Friday 27 March 2026 00:29:55 +0000 (0:00:01.285) 0:03:55.232 ********** 2026-03-27 00:29:57.117109 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:29:57.117113 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:29:57.117118 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:29:57.117122 | orchestrator | skipping: [testbed-node-2] 
2026-03-27 00:29:57.117126 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:29:57.117130 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:29:57.117134 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:29:57.117139 | orchestrator |
2026-03-27 00:29:57.117143 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-27 00:29:57.117147 | orchestrator | Friday 27 March 2026 00:29:55 +0000 (0:00:00.238) 0:03:55.471 **********
2026-03-27 00:29:57.117152 | orchestrator | ok: [testbed-manager]
2026-03-27 00:29:57.117157 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:29:57.117161 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:29:57.117166 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:29:57.117170 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:29:57.117174 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:29:57.117178 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:29:57.117183 | orchestrator |
2026-03-27 00:29:57.117187 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-27 00:29:57.117191 | orchestrator | Friday 27 March 2026 00:29:56 +0000 (0:00:00.737) 0:03:56.208 **********
2026-03-27 00:29:57.117198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:29:57.117204 | orchestrator |
2026-03-27 00:29:57.117209 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-27 00:29:57.117216 | orchestrator | Friday 27 March 2026 00:29:57 +0000 (0:00:00.425) 0:03:56.634 **********
2026-03-27 00:31:16.552078 | orchestrator | ok: [testbed-manager]
2026-03-27 00:31:16.552198 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:31:16.552215 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:31:16.552226 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:31:16.552264 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:31:16.552276 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:31:16.552286 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:31:16.552297 | orchestrator |
2026-03-27 00:31:16.552310 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-27 00:31:16.552323 | orchestrator | Friday 27 March 2026 00:30:06 +0000 (0:00:09.091) 0:04:05.725 **********
2026-03-27 00:31:16.552334 | orchestrator | ok: [testbed-manager]
2026-03-27 00:31:16.552345 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:31:16.552356 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:31:16.552366 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:31:16.552377 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:31:16.552388 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:31:16.552398 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:31:16.552409 | orchestrator |
2026-03-27 00:31:16.552420 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-27 00:31:16.552431 | orchestrator | Friday 27 March 2026 00:30:07 +0000 (0:00:01.134) 0:04:06.860 **********
2026-03-27 00:31:16.552442 | orchestrator | ok: [testbed-manager]
2026-03-27 00:31:16.552453 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:31:16.552463 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:31:16.552474 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:31:16.552484 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:31:16.552495 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:31:16.552506 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:31:16.552516 | orchestrator |
2026-03-27 00:31:16.552527 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-27 00:31:16.552538 | orchestrator | Friday 27 March 2026 00:30:08 +0000 (0:00:00.958) 0:04:07.818 **********
2026-03-27 00:31:16.552549 | orchestrator | ok: [testbed-manager]
2026-03-27 00:31:16.552559 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:31:16.552570 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:31:16.552581 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:31:16.552591 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:31:16.552604 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:31:16.552617 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:31:16.552629 | orchestrator |
2026-03-27 00:31:16.552642 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-27 00:31:16.552655 | orchestrator | Friday 27 March 2026 00:30:08 +0000 (0:00:00.283) 0:04:08.102 **********
2026-03-27 00:31:16.552667 | orchestrator | ok: [testbed-manager]
2026-03-27 00:31:16.552679 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:31:16.552691 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:31:16.552703 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:31:16.552715 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:31:16.552727 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:31:16.552740 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:31:16.552752 | orchestrator |
2026-03-27 00:31:16.552764 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-27 00:31:16.552777 | orchestrator | Friday 27 March 2026 00:30:08 +0000 (0:00:00.272) 0:04:08.374 **********
2026-03-27 00:31:16.552789 | orchestrator | ok: [testbed-manager]
2026-03-27 00:31:16.552801 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:31:16.552813 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:31:16.552825 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:31:16.552838 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:31:16.552850 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:31:16.552862 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:31:16.552874 | orchestrator |
2026-03-27 00:31:16.552886 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-27 00:31:16.552923 | orchestrator | Friday 27 March 2026 00:30:09 +0000 (0:00:00.274) 0:04:08.648 **********
2026-03-27 00:31:16.552937 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:31:16.552950 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:31:16.552962 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:31:16.552985 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:31:16.552996 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:31:16.553007 | orchestrator | ok: [testbed-manager]
2026-03-27 00:31:16.553018 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:31:16.553028 | orchestrator |
2026-03-27 00:31:16.553039 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-27 00:31:16.553050 | orchestrator | Friday 27 March 2026 00:30:14 +0000 (0:00:04.885) 0:04:13.534 **********
2026-03-27 00:31:16.553063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:31:16.553076 | orchestrator |
2026-03-27 00:31:16.553087 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-27 00:31:16.553098 | orchestrator | Friday 27 March 2026 00:30:14 +0000 (0:00:00.363) 0:04:13.898 **********
2026-03-27 00:31:16.553116 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-27 00:31:16.553140 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-27 00:31:16.553170 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:31:16.553189 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-27 00:31:16.553208 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-27 00:31:16.553226 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-27 00:31:16.553246 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-27 00:31:16.553267 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:31:16.553286 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-27 00:31:16.553306 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:31:16.553326 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-27 00:31:16.553343 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-27 00:31:16.553354 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:31:16.553365 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-27 00:31:16.553375 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-27 00:31:16.553386 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:31:16.553415 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-27 00:31:16.553429 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:31:16.553448 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-27 00:31:16.553465 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-27 00:31:16.553483 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:31:16.553512 | orchestrator |
2026-03-27 00:31:16.553559 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-27 00:31:16.553578 | orchestrator | Friday 27 March 2026 00:30:14 +0000 (0:00:00.314) 0:04:14.212 **********
2026-03-27 00:31:16.553596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:31:16.553614 | orchestrator |
2026-03-27 00:31:16.553626 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-27 00:31:16.553637 | orchestrator | Friday 27 March 2026 00:30:15 +0000 (0:00:00.474) 0:04:14.686 **********
2026-03-27 00:31:16.553648 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-27 00:31:16.553659 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:31:16.553669 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-27 00:31:16.553681 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-27 00:31:16.553691 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:31:16.553702 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-27 00:31:16.553724 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:31:16.553735 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-27 00:31:16.553746 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:31:16.553757 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-27 00:31:16.553767 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:31:16.553778 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:31:16.553789 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-27 00:31:16.553800 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:31:16.553810 | orchestrator |
2026-03-27 00:31:16.553821 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-27 00:31:16.553832 | orchestrator | Friday 27 March 2026 00:30:15 +0000 (0:00:00.293) 0:04:14.980 **********
2026-03-27 00:31:16.553860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:31:16.553872 | orchestrator |
2026-03-27 00:31:16.553883 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-27 00:31:16.553894 | orchestrator | Friday 27 March 2026 00:30:15 +0000 (0:00:00.392) 0:04:15.372 **********
2026-03-27 00:31:16.553969 | orchestrator | changed: [testbed-manager]
2026-03-27 00:31:16.553981 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:31:16.553992 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:31:16.554002 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:31:16.554075 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:31:16.554091 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:31:16.554102 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:31:16.554112 | orchestrator |
2026-03-27 00:31:16.554123 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-27 00:31:16.554134 | orchestrator | Friday 27 March 2026 00:30:49 +0000 (0:00:34.020) 0:04:49.393 **********
2026-03-27 00:31:16.554145 | orchestrator | changed: [testbed-manager]
2026-03-27 00:31:16.554156 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:31:16.554166 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:31:16.554177 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:31:16.554188 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:31:16.554198 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:31:16.554216 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:31:16.554227 | orchestrator |
2026-03-27 00:31:16.554238 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-27 00:31:16.554249 | orchestrator | Friday 27 March 2026 00:30:58 +0000 (0:00:09.024) 0:04:58.418 **********
2026-03-27 00:31:16.554260 | orchestrator | changed: [testbed-manager]
2026-03-27 00:31:16.554270 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:31:16.554290 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:31:16.554311 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:31:16.554331 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:31:16.554356 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:31:16.554381 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:31:16.554401 | orchestrator |
2026-03-27 00:31:16.554422 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-27 00:31:16.554475 | orchestrator | Friday 27 March 2026 00:31:07 +0000 (0:00:08.654) 0:05:07.073 **********
2026-03-27 00:31:16.554494 | orchestrator | ok: [testbed-manager]
2026-03-27 00:31:16.554505 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:31:16.554516 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:31:16.554527 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:31:16.554537 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:31:16.554548 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:31:16.554559 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:31:16.554578 | orchestrator |
2026-03-27 00:31:16.554597 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-27 00:31:16.554637 | orchestrator | Friday 27 March 2026 00:31:09 +0000 (0:00:01.927) 0:05:09.001 **********
2026-03-27 00:31:16.554660 | orchestrator | changed: [testbed-manager]
2026-03-27 00:31:16.554677 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:31:16.554693 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:31:16.554712 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:31:16.554730 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:31:16.554748 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:31:16.554767 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:31:16.554785 | orchestrator |
2026-03-27 00:31:16.554822 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-27 00:31:27.752316 | orchestrator | Friday 27 March 2026 00:31:16 +0000 (0:00:07.062) 0:05:16.063 **********
2026-03-27 00:31:27.752412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:31:27.752424 | orchestrator |
2026-03-27 00:31:27.752433 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-27 00:31:27.752442 | orchestrator | Friday 27 March 2026 00:31:16 +0000 (0:00:00.390) 0:05:16.454 **********
2026-03-27 00:31:27.752450 | orchestrator | changed: [testbed-manager]
2026-03-27 00:31:27.752458 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:31:27.752466 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:31:27.752473 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:31:27.752481 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:31:27.752488 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:31:27.752495 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:31:27.752503 | orchestrator |
2026-03-27 00:31:27.752510 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-27 00:31:27.752517 | orchestrator | Friday 27 March 2026 00:31:17 +0000 (0:00:00.848) 0:05:17.302 **********
2026-03-27 00:31:27.752524 | orchestrator | ok: [testbed-manager]
2026-03-27 00:31:27.752533 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:31:27.752540 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:31:27.752547 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:31:27.752555 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:31:27.752562 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:31:27.752569 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:31:27.752576 | orchestrator |
2026-03-27 00:31:27.752583 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-27 00:31:27.752591 | orchestrator | Friday 27 March 2026 00:31:19 +0000 (0:00:01.996) 0:05:19.299 **********
2026-03-27 00:31:27.752598 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:31:27.752605 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:31:27.752612 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:31:27.752619 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:31:27.752626 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:31:27.752633 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:31:27.752640 | orchestrator | changed: [testbed-manager]
2026-03-27 00:31:27.752648 | orchestrator |
2026-03-27 00:31:27.752655 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-27 00:31:27.752662 | orchestrator | Friday 27 March 2026 00:31:20 +0000 (0:00:00.815) 0:05:20.115 **********
2026-03-27 00:31:27.752669 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:31:27.752676 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:31:27.752684 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:31:27.752691 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:31:27.752698 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:31:27.752705 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:31:27.752712 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:31:27.752719 | orchestrator |
2026-03-27 00:31:27.752726 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-27 00:31:27.752734 | orchestrator | Friday 27 March 2026 00:31:20 +0000 (0:00:00.267) 0:05:20.383 **********
2026-03-27 00:31:27.752762 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:31:27.752769 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:31:27.752776 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:31:27.752783 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:31:27.752790 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:31:27.752797 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:31:27.752804 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:31:27.752811 | orchestrator |
2026-03-27 00:31:27.752818 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-27 00:31:27.752826 | orchestrator | Friday 27 March 2026 00:31:21 +0000 (0:00:00.402) 0:05:20.785 **********
2026-03-27 00:31:27.752833 | orchestrator | ok: [testbed-manager]
2026-03-27 00:31:27.752840 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:31:27.752847 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:31:27.752854 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:31:27.752861 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:31:27.752881 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:31:27.752929 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:31:27.752943 | orchestrator |
2026-03-27 00:31:27.752956 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-27 00:31:27.752969 | orchestrator | Friday 27 March 2026 00:31:21 +0000 (0:00:00.380) 0:05:21.166 **********
2026-03-27 00:31:27.752981 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:31:27.752994 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:31:27.753002 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:31:27.753014 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:31:27.753026 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:31:27.753038 | orchestrator | skipping: [testbed-node-4]
2026-03-27
00:31:27.753049 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:31:27.753061 | orchestrator | 2026-03-27 00:31:27.753073 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-27 00:31:27.753086 | orchestrator | Friday 27 March 2026 00:31:21 +0000 (0:00:00.238) 0:05:21.404 ********** 2026-03-27 00:31:27.753098 | orchestrator | ok: [testbed-manager] 2026-03-27 00:31:27.753111 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:31:27.753124 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:31:27.753136 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:31:27.753148 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:31:27.753160 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:31:27.753168 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:31:27.753175 | orchestrator | 2026-03-27 00:31:27.753184 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-27 00:31:27.753192 | orchestrator | Friday 27 March 2026 00:31:22 +0000 (0:00:00.319) 0:05:21.724 ********** 2026-03-27 00:31:27.753200 | orchestrator | ok: [testbed-manager] =>  2026-03-27 00:31:27.753208 | orchestrator |  docker_version: 5:27.5.1 2026-03-27 00:31:27.753216 | orchestrator | ok: [testbed-node-0] =>  2026-03-27 00:31:27.753224 | orchestrator |  docker_version: 5:27.5.1 2026-03-27 00:31:27.753231 | orchestrator | ok: [testbed-node-1] =>  2026-03-27 00:31:27.753238 | orchestrator |  docker_version: 5:27.5.1 2026-03-27 00:31:27.753245 | orchestrator | ok: [testbed-node-2] =>  2026-03-27 00:31:27.753252 | orchestrator |  docker_version: 5:27.5.1 2026-03-27 00:31:27.753274 | orchestrator | ok: [testbed-node-3] =>  2026-03-27 00:31:27.753282 | orchestrator |  docker_version: 5:27.5.1 2026-03-27 00:31:27.753289 | orchestrator | ok: [testbed-node-4] =>  2026-03-27 00:31:27.753296 | orchestrator |  docker_version: 5:27.5.1 2026-03-27 00:31:27.753303 | orchestrator | ok: [testbed-node-5] =>  
2026-03-27 00:31:27.753310 | orchestrator |  docker_version: 5:27.5.1 2026-03-27 00:31:27.753317 | orchestrator | 2026-03-27 00:31:27.753324 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-27 00:31:27.753331 | orchestrator | Friday 27 March 2026 00:31:22 +0000 (0:00:00.240) 0:05:21.964 ********** 2026-03-27 00:31:27.753338 | orchestrator | ok: [testbed-manager] =>  2026-03-27 00:31:27.753353 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-27 00:31:27.753361 | orchestrator | ok: [testbed-node-0] =>  2026-03-27 00:31:27.753368 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-27 00:31:27.753375 | orchestrator | ok: [testbed-node-1] =>  2026-03-27 00:31:27.753382 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-27 00:31:27.753388 | orchestrator | ok: [testbed-node-2] =>  2026-03-27 00:31:27.753395 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-27 00:31:27.753402 | orchestrator | ok: [testbed-node-3] =>  2026-03-27 00:31:27.753409 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-27 00:31:27.753416 | orchestrator | ok: [testbed-node-4] =>  2026-03-27 00:31:27.753423 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-27 00:31:27.753430 | orchestrator | ok: [testbed-node-5] =>  2026-03-27 00:31:27.753437 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-27 00:31:27.753444 | orchestrator | 2026-03-27 00:31:27.753452 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-27 00:31:27.753459 | orchestrator | Friday 27 March 2026 00:31:22 +0000 (0:00:00.268) 0:05:22.232 ********** 2026-03-27 00:31:27.753466 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:31:27.753473 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:31:27.753480 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:31:27.753487 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:31:27.753494 | orchestrator | skipping: [testbed-node-3] 
2026-03-27 00:31:27.753501 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:31:27.753508 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:31:27.753515 | orchestrator | 2026-03-27 00:31:27.753522 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-27 00:31:27.753529 | orchestrator | Friday 27 March 2026 00:31:22 +0000 (0:00:00.243) 0:05:22.476 ********** 2026-03-27 00:31:27.753536 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:31:27.753543 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:31:27.753550 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:31:27.753557 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:31:27.753564 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:31:27.753571 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:31:27.753578 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:31:27.753585 | orchestrator | 2026-03-27 00:31:27.753592 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-27 00:31:27.753599 | orchestrator | Friday 27 March 2026 00:31:23 +0000 (0:00:00.250) 0:05:22.726 ********** 2026-03-27 00:31:27.753608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:31:27.753617 | orchestrator | 2026-03-27 00:31:27.753624 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-27 00:31:27.753631 | orchestrator | Friday 27 March 2026 00:31:23 +0000 (0:00:00.406) 0:05:23.133 ********** 2026-03-27 00:31:27.753638 | orchestrator | ok: [testbed-manager] 2026-03-27 00:31:27.753645 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:31:27.753652 | orchestrator | ok: [testbed-node-1] 2026-03-27 
00:31:27.753659 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:31:27.753668 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:31:27.753680 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:31:27.753691 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:31:27.753703 | orchestrator | 2026-03-27 00:31:27.753715 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-27 00:31:27.753725 | orchestrator | Friday 27 March 2026 00:31:24 +0000 (0:00:00.829) 0:05:23.962 ********** 2026-03-27 00:31:27.753743 | orchestrator | ok: [testbed-manager] 2026-03-27 00:31:27.753756 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:31:27.753766 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:31:27.753777 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:31:27.753788 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:31:27.753807 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:31:27.753819 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:31:27.753831 | orchestrator | 2026-03-27 00:31:27.753843 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-27 00:31:27.753858 | orchestrator | Friday 27 March 2026 00:31:27 +0000 (0:00:02.974) 0:05:26.937 ********** 2026-03-27 00:31:27.753869 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-27 00:31:27.753883 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-27 00:31:27.753954 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-27 00:31:27.753967 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-27 00:31:27.753979 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-27 00:31:27.753991 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-27 00:31:27.754003 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:31:27.754067 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-03-27 00:31:27.754077 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-27 00:31:27.754084 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-27 00:31:27.754091 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:31:27.754098 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-27 00:31:27.754105 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-27 00:31:27.754112 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-27 00:31:27.754119 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:31:27.754126 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-27 00:31:27.754143 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-27 00:32:34.353146 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-27 00:32:34.353273 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:32:34.353293 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-27 00:32:34.353310 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-27 00:32:34.353325 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-27 00:32:34.353340 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:32:34.353354 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:32:34.353370 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-27 00:32:34.353384 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-27 00:32:34.353399 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-27 00:32:34.353414 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:32:34.353429 | orchestrator | 2026-03-27 00:32:34.353445 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-27 00:32:34.353462 | orchestrator | Friday 27 
March 2026 00:31:27 +0000 (0:00:00.552) 0:05:27.489 ********** 2026-03-27 00:32:34.353477 | orchestrator | ok: [testbed-manager] 2026-03-27 00:32:34.353492 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.353507 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:32:34.353521 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.353536 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.353550 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.353565 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:32:34.353580 | orchestrator | 2026-03-27 00:32:34.353595 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-27 00:32:34.353609 | orchestrator | Friday 27 March 2026 00:31:35 +0000 (0:00:07.586) 0:05:35.076 ********** 2026-03-27 00:32:34.353624 | orchestrator | ok: [testbed-manager] 2026-03-27 00:32:34.353652 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:32:34.353671 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.353693 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.353714 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.353729 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.353773 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:32:34.353789 | orchestrator | 2026-03-27 00:32:34.353804 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-27 00:32:34.353818 | orchestrator | Friday 27 March 2026 00:31:36 +0000 (0:00:01.079) 0:05:36.156 ********** 2026-03-27 00:32:34.353855 | orchestrator | ok: [testbed-manager] 2026-03-27 00:32:34.353870 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.353884 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.353898 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.353912 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.353926 | orchestrator | 
changed: [testbed-node-0] 2026-03-27 00:32:34.353936 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:32:34.353945 | orchestrator | 2026-03-27 00:32:34.353954 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-27 00:32:34.353963 | orchestrator | Friday 27 March 2026 00:31:45 +0000 (0:00:09.253) 0:05:45.410 ********** 2026-03-27 00:32:34.353973 | orchestrator | changed: [testbed-manager] 2026-03-27 00:32:34.353982 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:32:34.353991 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.354000 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.354007 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.354064 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.354073 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:32:34.354080 | orchestrator | 2026-03-27 00:32:34.354089 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-27 00:32:34.354097 | orchestrator | Friday 27 March 2026 00:31:50 +0000 (0:00:04.125) 0:05:49.536 ********** 2026-03-27 00:32:34.354104 | orchestrator | ok: [testbed-manager] 2026-03-27 00:32:34.354112 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:32:34.354120 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.354127 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.354135 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.354143 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.354150 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:32:34.354158 | orchestrator | 2026-03-27 00:32:34.354180 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-27 00:32:34.354189 | orchestrator | Friday 27 March 2026 00:31:51 +0000 (0:00:01.533) 0:05:51.069 ********** 2026-03-27 00:32:34.354196 | orchestrator | ok: [testbed-manager] 
2026-03-27 00:32:34.354204 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:32:34.354212 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.354220 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.354227 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.354235 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.354243 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:32:34.354250 | orchestrator | 2026-03-27 00:32:34.354258 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-27 00:32:34.354266 | orchestrator | Friday 27 March 2026 00:31:52 +0000 (0:00:01.321) 0:05:52.391 ********** 2026-03-27 00:32:34.354274 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:32:34.354282 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:32:34.354290 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:32:34.354298 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:32:34.354305 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:32:34.354313 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:32:34.354321 | orchestrator | changed: [testbed-manager] 2026-03-27 00:32:34.354329 | orchestrator | 2026-03-27 00:32:34.354337 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-27 00:32:34.354345 | orchestrator | Friday 27 March 2026 00:31:53 +0000 (0:00:00.576) 0:05:52.967 ********** 2026-03-27 00:32:34.354353 | orchestrator | ok: [testbed-manager] 2026-03-27 00:32:34.354360 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.354368 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.354384 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.354392 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:32:34.354400 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.354407 | orchestrator | changed: [testbed-node-5] 2026-03-27 
00:32:34.354415 | orchestrator | 2026-03-27 00:32:34.354423 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-27 00:32:34.354449 | orchestrator | Friday 27 March 2026 00:32:04 +0000 (0:00:10.672) 0:06:03.640 ********** 2026-03-27 00:32:34.354457 | orchestrator | changed: [testbed-manager] 2026-03-27 00:32:34.354465 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:32:34.354473 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.354481 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.354488 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.354496 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.354504 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:32:34.354511 | orchestrator | 2026-03-27 00:32:34.354519 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-27 00:32:34.354527 | orchestrator | Friday 27 March 2026 00:32:05 +0000 (0:00:01.113) 0:06:04.753 ********** 2026-03-27 00:32:34.354535 | orchestrator | ok: [testbed-manager] 2026-03-27 00:32:34.354543 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.354550 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.354558 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:32:34.354566 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.354574 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:32:34.354581 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.354589 | orchestrator | 2026-03-27 00:32:34.354597 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-27 00:32:34.354605 | orchestrator | Friday 27 March 2026 00:32:15 +0000 (0:00:09.942) 0:06:14.695 ********** 2026-03-27 00:32:34.354613 | orchestrator | ok: [testbed-manager] 2026-03-27 00:32:34.354623 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.354636 | 
orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.354650 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.354664 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:32:34.354678 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.354693 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:32:34.354707 | orchestrator | 2026-03-27 00:32:34.354722 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-27 00:32:34.354736 | orchestrator | Friday 27 March 2026 00:32:27 +0000 (0:00:12.339) 0:06:27.035 ********** 2026-03-27 00:32:34.354744 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-27 00:32:34.354752 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-27 00:32:34.354760 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-27 00:32:34.354767 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-27 00:32:34.354775 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-27 00:32:34.354783 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-27 00:32:34.354791 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-27 00:32:34.354798 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-27 00:32:34.354806 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-27 00:32:34.354814 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-27 00:32:34.354821 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-27 00:32:34.354849 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-27 00:32:34.354857 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-27 00:32:34.354865 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-27 00:32:34.354872 | orchestrator | 2026-03-27 00:32:34.354880 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-27 00:32:34.354888 | orchestrator | Friday 27 March 2026 00:32:28 +0000 (0:00:01.263) 0:06:28.298 ********** 2026-03-27 00:32:34.354904 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:32:34.354912 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:32:34.354920 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:32:34.354927 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:32:34.354935 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:32:34.354943 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:32:34.354950 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:32:34.354958 | orchestrator | 2026-03-27 00:32:34.354966 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-27 00:32:34.354974 | orchestrator | Friday 27 March 2026 00:32:29 +0000 (0:00:00.669) 0:06:28.968 ********** 2026-03-27 00:32:34.354981 | orchestrator | ok: [testbed-manager] 2026-03-27 00:32:34.354989 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:32:34.354997 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:32:34.355005 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:32:34.355013 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:32:34.355021 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:32:34.355028 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:32:34.355036 | orchestrator | 2026-03-27 00:32:34.355044 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-27 00:32:34.355053 | orchestrator | Friday 27 March 2026 00:32:33 +0000 (0:00:04.180) 0:06:33.149 ********** 2026-03-27 00:32:34.355061 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:32:34.355069 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:32:34.355077 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:32:34.355084 | orchestrator | skipping: 
[testbed-node-2] 2026-03-27 00:32:34.355092 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:32:34.355100 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:32:34.355107 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:32:34.355115 | orchestrator | 2026-03-27 00:32:34.355124 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-27 00:32:34.355132 | orchestrator | Friday 27 March 2026 00:32:34 +0000 (0:00:00.476) 0:06:33.625 ********** 2026-03-27 00:32:34.355140 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-27 00:32:34.355147 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-27 00:32:34.355155 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:32:34.355163 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-27 00:32:34.355170 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-27 00:32:34.355178 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:32:34.355186 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-27 00:32:34.355194 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-27 00:32:34.355201 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:32:34.355216 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-27 00:32:53.961175 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-27 00:32:53.961291 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:32:53.961307 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-27 00:32:53.961318 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-27 00:32:53.961382 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:32:53.961396 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-27 00:32:53.961406 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-27 00:32:53.961417 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:32:53.961428 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-27 00:32:53.961439 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-27 00:32:53.961449 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:32:53.961461 | orchestrator | 2026-03-27 00:32:53.961473 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-27 00:32:53.961508 | orchestrator | Friday 27 March 2026 00:32:34 +0000 (0:00:00.507) 0:06:34.133 ********** 2026-03-27 00:32:53.961520 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:32:53.961530 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:32:53.961541 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:32:53.961551 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:32:53.961562 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:32:53.961572 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:32:53.961582 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:32:53.961593 | orchestrator | 2026-03-27 00:32:53.961606 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-27 00:32:53.961625 | orchestrator | Friday 27 March 2026 00:32:35 +0000 (0:00:00.485) 0:06:34.618 ********** 2026-03-27 00:32:53.961643 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:32:53.961659 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:32:53.961676 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:32:53.961695 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:32:53.961715 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:32:53.961733 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:32:53.961753 | orchestrator | skipping: [testbed-node-5] 
2026-03-27 00:32:53.961775 | orchestrator |
2026-03-27 00:32:53.961789 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-27 00:32:53.961837 | orchestrator | Friday 27 March 2026  00:32:35 +0000 (0:00:00.631) 0:06:35.249 **********
2026-03-27 00:32:53.961850 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:32:53.961860 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:32:53.961871 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:32:53.961881 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:32:53.961892 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:32:53.961902 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:32:53.961913 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:32:53.961923 | orchestrator |
2026-03-27 00:32:53.961934 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-27 00:32:53.961945 | orchestrator | Friday 27 March 2026  00:32:36 +0000 (0:00:00.497) 0:06:35.747 **********
2026-03-27 00:32:53.961956 | orchestrator | ok: [testbed-manager]
2026-03-27 00:32:53.961967 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:32:53.961977 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:32:53.962017 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:32:53.962028 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:32:53.962039 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:32:53.962049 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:32:53.962064 | orchestrator |
2026-03-27 00:32:53.962134 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-27 00:32:53.962145 | orchestrator | Friday 27 March 2026  00:32:38 +0000 (0:00:01.919) 0:06:37.667 **********
2026-03-27 00:32:53.962157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:32:53.962170 | orchestrator |
2026-03-27 00:32:53.962197 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-27 00:32:53.962208 | orchestrator | Friday 27 March 2026  00:32:38 +0000 (0:00:00.802) 0:06:38.469 **********
2026-03-27 00:32:53.962219 | orchestrator | ok: [testbed-manager]
2026-03-27 00:32:53.962236 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:32:53.962258 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:32:53.962288 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:32:53.962304 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:32:53.962320 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:32:53.962337 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:32:53.962354 | orchestrator |
2026-03-27 00:32:53.962371 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-27 00:32:53.962402 | orchestrator | Friday 27 March 2026  00:32:39 +0000 (0:00:01.028) 0:06:39.498 **********
2026-03-27 00:32:53.962421 | orchestrator | ok: [testbed-manager]
2026-03-27 00:32:53.962438 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:32:53.962455 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:32:53.962474 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:32:53.962492 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:32:53.962512 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:32:53.962529 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:32:53.962547 | orchestrator |
2026-03-27 00:32:53.962567 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-27 00:32:53.962585 | orchestrator | Friday 27 March 2026  00:32:40 +0000 (0:00:00.858) 0:06:40.356 **********
2026-03-27 00:32:53.962603 | orchestrator | ok: [testbed-manager]
2026-03-27 00:32:53.962614 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:32:53.962624 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:32:53.962635 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:32:53.962645 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:32:53.962656 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:32:53.962666 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:32:53.962677 | orchestrator |
2026-03-27 00:32:53.962687 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-27 00:32:53.962721 | orchestrator | Friday 27 March 2026  00:32:42 +0000 (0:00:01.362) 0:06:41.719 **********
2026-03-27 00:32:53.962732 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:32:53.962743 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:32:53.962753 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:32:53.962764 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:32:53.962775 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:32:53.962792 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:32:53.962837 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:32:53.962856 | orchestrator |
2026-03-27 00:32:53.962875 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-27 00:32:53.962893 | orchestrator | Friday 27 March 2026  00:32:43 +0000 (0:00:01.519) 0:06:43.238 **********
2026-03-27 00:32:53.962909 | orchestrator | ok: [testbed-manager]
2026-03-27 00:32:53.962920 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:32:53.962931 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:32:53.962941 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:32:53.962952 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:32:53.962962 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:32:53.962973 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:32:53.962983 | orchestrator |
2026-03-27 00:32:53.962994 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-27 00:32:53.963004 | orchestrator | Friday 27 March 2026  00:32:45 +0000 (0:00:01.526) 0:06:44.764 **********
2026-03-27 00:32:53.963015 | orchestrator | changed: [testbed-manager]
2026-03-27 00:32:53.963030 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:32:53.963048 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:32:53.963065 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:32:53.963083 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:32:53.963100 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:32:53.963118 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:32:53.963138 | orchestrator |
2026-03-27 00:32:53.963157 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-27 00:32:53.963175 | orchestrator | Friday 27 March 2026  00:32:46 +0000 (0:00:01.472) 0:06:46.237 **********
2026-03-27 00:32:53.963194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:32:53.963208 | orchestrator |
2026-03-27 00:32:53.963219 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-27 00:32:53.963229 | orchestrator | Friday 27 March 2026  00:32:47 +0000 (0:00:00.832) 0:06:47.069 **********
2026-03-27 00:32:53.963256 | orchestrator | ok: [testbed-manager]
2026-03-27 00:32:53.963267 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:32:53.963300 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:32:53.963311 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:32:53.963322 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:32:53.963333 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:32:53.963343 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:32:53.963354 | orchestrator |
2026-03-27 00:32:53.963364 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-27 00:32:53.963376 | orchestrator | Friday 27 March 2026  00:32:49 +0000 (0:00:01.519) 0:06:48.589 **********
2026-03-27 00:32:53.963386 | orchestrator | ok: [testbed-manager]
2026-03-27 00:32:53.963397 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:32:53.963407 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:32:53.963418 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:32:53.963428 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:32:53.963439 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:32:53.963449 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:32:53.963460 | orchestrator |
2026-03-27 00:32:53.963481 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-27 00:32:53.963501 | orchestrator | Friday 27 March 2026  00:32:50 +0000 (0:00:01.434) 0:06:50.024 **********
2026-03-27 00:32:53.963520 | orchestrator | ok: [testbed-manager]
2026-03-27 00:32:53.963557 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:32:53.963574 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:32:53.963591 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:32:53.963608 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:32:53.963625 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:32:53.963642 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:32:53.963660 | orchestrator |
2026-03-27 00:32:53.963677 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-27 00:32:53.963696 | orchestrator | Friday 27 March 2026  00:32:51 +0000 (0:00:01.174) 0:06:51.198 **********
2026-03-27 00:32:53.963715 | orchestrator | ok: [testbed-manager]
2026-03-27 00:32:53.963734 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:32:53.963753 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:32:53.963771 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:32:53.963790 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:32:53.963828 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:32:53.963907 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:32:53.963925 | orchestrator |
2026-03-27 00:32:53.963943 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-27 00:32:53.963961 | orchestrator | Friday 27 March 2026  00:32:52 +0000 (0:00:01.157) 0:06:52.355 **********
2026-03-27 00:32:53.963979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:32:53.963996 | orchestrator |
2026-03-27 00:32:53.964013 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-27 00:32:53.964032 | orchestrator | Friday 27 March 2026  00:32:53 +0000 (0:00:00.849) 0:06:53.205 **********
2026-03-27 00:32:53.964050 | orchestrator |
2026-03-27 00:32:53.964067 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-27 00:32:53.964085 | orchestrator | Friday 27 March 2026  00:32:53 +0000 (0:00:00.044) 0:06:53.249 **********
2026-03-27 00:32:53.964104 | orchestrator |
2026-03-27 00:32:53.964122 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-27 00:32:53.964141 | orchestrator | Friday 27 March 2026  00:32:53 +0000 (0:00:00.188) 0:06:53.437 **********
2026-03-27 00:32:53.964160 | orchestrator |
2026-03-27 00:32:53.964176 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-27 00:32:53.964204 | orchestrator | Friday 27 March 2026  00:32:53 +0000 (0:00:00.039) 0:06:53.476 **********
2026-03-27 00:33:21.349269 | orchestrator |
2026-03-27 00:33:21.349386 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-27 00:33:21.349431 | orchestrator | Friday 27 March 2026  00:32:53 +0000 (0:00:00.038) 0:06:53.515 **********
2026-03-27 00:33:21.349445 | orchestrator |
2026-03-27 00:33:21.349455 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-27 00:33:21.349466 | orchestrator | Friday 27 March 2026  00:32:54 +0000 (0:00:00.043) 0:06:53.559 **********
2026-03-27 00:33:21.349477 | orchestrator |
2026-03-27 00:33:21.349488 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-27 00:33:21.349499 | orchestrator | Friday 27 March 2026  00:32:54 +0000 (0:00:00.038) 0:06:53.597 **********
2026-03-27 00:33:21.349528 | orchestrator |
2026-03-27 00:33:21.349540 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-27 00:33:21.349561 | orchestrator | Friday 27 March 2026  00:32:54 +0000 (0:00:00.038) 0:06:53.636 **********
2026-03-27 00:33:21.349573 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:21.349585 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:21.349596 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:21.349606 | orchestrator |
2026-03-27 00:33:21.349617 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-27 00:33:21.349628 | orchestrator | Friday 27 March 2026  00:32:55 +0000 (0:00:01.324) 0:06:54.961 **********
2026-03-27 00:33:21.349639 | orchestrator | changed: [testbed-manager]
2026-03-27 00:33:21.349650 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:33:21.349661 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:33:21.349671 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:33:21.349682 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:33:21.349693 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:33:21.349704 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:33:21.349714 | orchestrator |
2026-03-27 00:33:21.349725 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-27 00:33:21.349736 | orchestrator | Friday 27 March 2026  00:32:56 +0000 (0:00:01.542) 0:06:56.503 **********
2026-03-27 00:33:21.349747 | orchestrator | changed: [testbed-manager]
2026-03-27 00:33:21.349785 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:33:21.349807 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:33:21.349828 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:33:21.349850 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:33:21.349869 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:33:21.349882 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:33:21.349894 | orchestrator |
2026-03-27 00:33:21.349907 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-27 00:33:21.349919 | orchestrator | Friday 27 March 2026  00:32:58 +0000 (0:00:01.191) 0:06:57.694 **********
2026-03-27 00:33:21.349931 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:33:21.349944 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:33:21.349956 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:33:21.349968 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:33:21.349980 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:33:21.349993 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:33:21.350005 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:33:21.350078 | orchestrator |
2026-03-27 00:33:21.350093 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-27 00:33:21.350107 | orchestrator | Friday 27 March 2026  00:33:00 +0000 (0:00:02.672) 0:07:00.367 **********
2026-03-27 00:33:21.350128 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:33:21.350147 | orchestrator |
2026-03-27 00:33:21.350166 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-27 00:33:21.350184 | orchestrator | Friday 27 March 2026  00:33:00 +0000 (0:00:00.100) 0:07:00.468 **********
2026-03-27 00:33:21.350202 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:21.350220 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:33:21.350240 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:33:21.350259 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:33:21.350297 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:33:21.350315 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:33:21.350327 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:33:21.350337 | orchestrator |
2026-03-27 00:33:21.350363 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-27 00:33:21.350375 | orchestrator | Friday 27 March 2026  00:33:02 +0000 (0:00:01.228) 0:07:01.696 **********
2026-03-27 00:33:21.350385 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:33:21.350396 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:33:21.350407 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:33:21.350417 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:33:21.350428 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:33:21.350438 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:33:21.350449 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:33:21.350460 | orchestrator |
2026-03-27 00:33:21.350470 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-27 00:33:21.350481 | orchestrator | Friday 27 March 2026  00:33:02 +0000 (0:00:00.506) 0:07:02.203 **********
2026-03-27 00:33:21.350493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:33:21.350506 | orchestrator |
2026-03-27 00:33:21.350517 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-27 00:33:21.350528 | orchestrator | Friday 27 March 2026  00:33:03 +0000 (0:00:00.875) 0:07:03.079 **********
2026-03-27 00:33:21.350538 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:21.350549 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:21.350560 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:21.350570 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:21.350581 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:21.350591 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:21.350602 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:21.350612 | orchestrator |
2026-03-27 00:33:21.350623 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-27 00:33:21.350634 | orchestrator | Friday 27 March 2026  00:33:04 +0000 (0:00:01.024) 0:07:04.103 **********
2026-03-27 00:33:21.350644 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-27 00:33:21.350676 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-27 00:33:21.350688 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-27 00:33:21.350699 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-27 00:33:21.350709 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-27 00:33:21.350720 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-27 00:33:21.350730 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-27 00:33:21.350741 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-27 00:33:21.350752 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-27 00:33:21.350816 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-27 00:33:21.350830 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-27 00:33:21.350840 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-27 00:33:21.350851 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-27 00:33:21.350862 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-27 00:33:21.350873 | orchestrator |
2026-03-27 00:33:21.350883 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-27 00:33:21.350894 | orchestrator | Friday 27 March 2026  00:33:07 +0000 (0:00:02.584) 0:07:06.687 **********
2026-03-27 00:33:21.350905 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:33:21.350916 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:33:21.350926 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:33:21.350945 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:33:21.350956 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:33:21.350967 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:33:21.350978 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:33:21.350988 | orchestrator |
2026-03-27 00:33:21.350999 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-27 00:33:21.351012 | orchestrator | Friday 27 March 2026  00:33:07 +0000 (0:00:00.473) 0:07:07.161 **********
2026-03-27 00:33:21.351035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:33:21.351056 | orchestrator |
2026-03-27 00:33:21.351076 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-27 00:33:21.351095 | orchestrator | Friday 27 March 2026  00:33:08 +0000 (0:00:00.955) 0:07:08.117 **********
2026-03-27 00:33:21.351115 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:21.351134 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:21.351153 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:21.351173 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:21.351193 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:21.351212 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:21.351230 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:21.351249 | orchestrator |
2026-03-27 00:33:21.351270 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-27 00:33:21.351292 | orchestrator | Friday 27 March 2026  00:33:09 +0000 (0:00:00.869) 0:07:08.986 **********
2026-03-27 00:33:21.351313 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:21.351326 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:21.351337 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:21.351347 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:21.351357 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:21.351368 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:21.351378 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:21.351389 | orchestrator |
2026-03-27 00:33:21.351399 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-27 00:33:21.351410 | orchestrator | Friday 27 March 2026  00:33:10 +0000 (0:00:00.810) 0:07:09.797 **********
2026-03-27 00:33:21.351421 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:33:21.351432 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:33:21.351442 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:33:21.351461 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:33:21.351472 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:33:21.351483 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:33:21.351493 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:33:21.351504 | orchestrator |
2026-03-27 00:33:21.351514 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-27 00:33:21.351525 | orchestrator | Friday 27 March 2026  00:33:10 +0000 (0:00:00.471) 0:07:10.268 **********
2026-03-27 00:33:21.351536 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:21.351546 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:21.351557 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:21.351567 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:21.351578 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:21.351589 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:21.351599 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:21.351610 | orchestrator |
2026-03-27 00:33:21.351620 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-27 00:33:21.351631 | orchestrator | Friday 27 March 2026  00:33:12 +0000 (0:00:01.540) 0:07:11.809 **********
2026-03-27 00:33:21.351642 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:33:21.351653 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:33:21.351663 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:33:21.351674 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:33:21.351684 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:33:21.351705 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:33:21.351716 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:33:21.351726 | orchestrator |
2026-03-27 00:33:21.351737 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-27 00:33:21.351749 | orchestrator | Friday 27 March 2026  00:33:12 +0000 (0:00:00.615) 0:07:12.425 **********
2026-03-27 00:33:21.351800 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:21.351818 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:33:21.351837 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:33:21.351857 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:33:21.351875 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:33:21.351894 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:33:21.351926 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:33:53.890278 | orchestrator |
2026-03-27 00:33:53.890405 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-27 00:33:53.890434 | orchestrator | Friday 27 March 2026  00:33:21 +0000 (0:00:08.538) 0:07:20.963 **********
2026-03-27 00:33:53.890455 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.890476 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:33:53.890497 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:33:53.890509 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:33:53.890520 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:33:53.890531 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:33:53.890542 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:33:53.890553 | orchestrator |
2026-03-27 00:33:53.890564 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-27 00:33:53.890575 | orchestrator | Friday 27 March 2026  00:33:22 +0000 (0:00:01.358) 0:07:22.322 **********
2026-03-27 00:33:53.890586 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.890596 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:33:53.890607 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:33:53.890618 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:33:53.890629 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:33:53.890640 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:33:53.890650 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:33:53.890661 | orchestrator |
2026-03-27 00:33:53.890672 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-27 00:33:53.890683 | orchestrator | Friday 27 March 2026  00:33:24 +0000 (0:00:01.757) 0:07:24.079 **********
2026-03-27 00:33:53.890694 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.890704 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:33:53.890764 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:33:53.890775 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:33:53.890786 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:33:53.890799 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:33:53.890811 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:33:53.890824 | orchestrator |
2026-03-27 00:33:53.890836 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-27 00:33:53.890849 | orchestrator | Friday 27 March 2026  00:33:26 +0000 (0:00:01.903) 0:07:25.983 **********
2026-03-27 00:33:53.890861 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.890873 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:53.890885 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:53.890897 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:53.890909 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:53.890921 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:53.890933 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:53.890945 | orchestrator |
2026-03-27 00:33:53.890957 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-27 00:33:53.890970 | orchestrator | Friday 27 March 2026  00:33:27 +0000 (0:00:00.855) 0:07:26.838 **********
2026-03-27 00:33:53.890982 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:33:53.890994 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:33:53.891006 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:33:53.891045 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:33:53.891057 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:33:53.891069 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:33:53.891081 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:33:53.891094 | orchestrator |
2026-03-27 00:33:53.891106 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-27 00:33:53.891119 | orchestrator | Friday 27 March 2026  00:33:28 +0000 (0:00:00.794) 0:07:27.633 **********
2026-03-27 00:33:53.891131 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:33:53.891143 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:33:53.891154 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:33:53.891165 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:33:53.891175 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:33:53.891186 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:33:53.891197 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:33:53.891207 | orchestrator |
2026-03-27 00:33:53.891218 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-27 00:33:53.891229 | orchestrator | Friday 27 March 2026  00:33:28 +0000 (0:00:00.677) 0:07:28.311 **********
2026-03-27 00:33:53.891239 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.891250 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:53.891271 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:53.891291 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:53.891310 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:53.891328 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:53.891344 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:53.891364 | orchestrator |
2026-03-27 00:33:53.891385 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-27 00:33:53.891405 | orchestrator | Friday 27 March 2026  00:33:29 +0000 (0:00:00.475) 0:07:28.786 **********
2026-03-27 00:33:53.891425 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.891444 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:53.891464 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:53.891483 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:53.891501 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:53.891512 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:53.891522 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:53.891533 | orchestrator |
2026-03-27 00:33:53.891544 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-27 00:33:53.891555 | orchestrator | Friday 27 March 2026  00:33:29 +0000 (0:00:00.493) 0:07:29.279 **********
2026-03-27 00:33:53.891565 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.891576 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:53.891586 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:53.891597 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:53.891607 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:53.891618 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:53.891628 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:53.891639 | orchestrator |
2026-03-27 00:33:53.891650 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-27 00:33:53.891660 | orchestrator | Friday 27 March 2026  00:33:30 +0000 (0:00:00.491) 0:07:29.771 **********
2026-03-27 00:33:53.891671 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.891681 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:53.891692 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:53.891703 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:53.891737 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:53.891748 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:53.891758 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:53.891769 | orchestrator |
2026-03-27 00:33:53.891801 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-27 00:33:53.891813 | orchestrator | Friday 27 March 2026  00:33:35 +0000 (0:00:05.002) 0:07:34.774 **********
2026-03-27 00:33:53.891824 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:33:53.891835 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:33:53.891856 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:33:53.891868 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:33:53.891878 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:33:53.891889 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:33:53.891899 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:33:53.891910 | orchestrator |
2026-03-27 00:33:53.891921 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-27 00:33:53.891932 | orchestrator | Friday 27 March 2026  00:33:35 +0000 (0:00:00.675) 0:07:35.450 **********
2026-03-27 00:33:53.891944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:33:53.891958 | orchestrator |
2026-03-27 00:33:53.891969 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-27 00:33:53.891980 | orchestrator | Friday 27 March 2026  00:33:36 +0000 (0:00:00.779) 0:07:36.229 **********
2026-03-27 00:33:53.892006 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.892017 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:53.892028 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:53.892039 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:53.892049 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:53.892060 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:53.892070 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:53.892081 | orchestrator |
2026-03-27 00:33:53.892092 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-27 00:33:53.892102 | orchestrator | Friday 27 March 2026  00:33:38 +0000 (0:00:02.187) 0:07:38.417 **********
2026-03-27 00:33:53.892114 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.892124 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:53.892134 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:53.892145 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:53.892155 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:53.892166 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:53.892177 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:53.892187 | orchestrator |
2026-03-27 00:33:53.892198 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-27 00:33:53.892209 | orchestrator | Friday 27 March 2026  00:33:40 +0000 (0:00:01.352) 0:07:39.770 **********
2026-03-27 00:33:53.892220 | orchestrator | ok: [testbed-manager]
2026-03-27 00:33:53.892230 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:33:53.892241 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:33:53.892252 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:33:53.892262 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:33:53.892273 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:33:53.892283 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:33:53.892294 | orchestrator |
2026-03-27 00:33:53.892305 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-27 00:33:53.892316 | orchestrator | Friday 27 March 2026  00:33:41 +0000 (0:00:00.849) 0:07:40.619 **********
2026-03-27 00:33:53.892327 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-27 00:33:53.892338 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-27 00:33:53.892350 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-27 00:33:53.892365 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-27 00:33:53.892377 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-27 00:33:53.892395 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-27 00:33:53.892406 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-27 00:33:53.892417 | orchestrator |
2026-03-27 00:33:53.892428 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-27 00:33:53.892438 | orchestrator | Friday 27 March 2026  00:33:42 +0000 (0:00:01.745) 0:07:42.364 **********
2026-03-27 00:33:53.892449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:33:53.892467 | orchestrator |
2026-03-27 00:33:53.892486 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-27 00:33:53.892506 |
orchestrator | Friday 27 March 2026 00:33:43 +0000 (0:00:00.938) 0:07:43.303 ********** 2026-03-27 00:33:53.892524 | orchestrator | changed: [testbed-manager] 2026-03-27 00:33:53.892543 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:33:53.892563 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:33:53.892581 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:33:53.892592 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:33:53.892603 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:33:53.892613 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:33:53.892624 | orchestrator | 2026-03-27 00:33:53.892644 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-27 00:34:25.424932 | orchestrator | Friday 27 March 2026 00:33:53 +0000 (0:00:10.102) 0:07:53.406 ********** 2026-03-27 00:34:25.425070 | orchestrator | ok: [testbed-manager] 2026-03-27 00:34:25.425097 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:34:25.425114 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:34:25.425133 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:34:25.425151 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:34:25.425170 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:34:25.425189 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:34:25.425206 | orchestrator | 2026-03-27 00:34:25.425228 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-27 00:34:25.425240 | orchestrator | Friday 27 March 2026 00:33:55 +0000 (0:00:01.696) 0:07:55.102 ********** 2026-03-27 00:34:25.425251 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:34:25.425262 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:34:25.425272 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:34:25.425283 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:34:25.425293 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:34:25.425304 | orchestrator | ok: [testbed-node-5] 
2026-03-27 00:34:25.425315 | orchestrator | 2026-03-27 00:34:25.425326 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-27 00:34:25.425336 | orchestrator | Friday 27 March 2026 00:33:57 +0000 (0:00:01.513) 0:07:56.616 ********** 2026-03-27 00:34:25.425347 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:25.425359 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:25.425370 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:25.425381 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:25.425391 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:25.425402 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:25.425414 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:25.425426 | orchestrator | 2026-03-27 00:34:25.425439 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-27 00:34:25.425452 | orchestrator | 2026-03-27 00:34:25.425464 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-27 00:34:25.425476 | orchestrator | Friday 27 March 2026 00:33:58 +0000 (0:00:01.338) 0:07:57.954 ********** 2026-03-27 00:34:25.425489 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:34:25.425501 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:34:25.425545 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:34:25.425557 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:34:25.425570 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:34:25.425582 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:34:25.425594 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:34:25.425606 | orchestrator | 2026-03-27 00:34:25.425618 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-27 00:34:25.425631 | orchestrator | 2026-03-27 00:34:25.425643 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-03-27 00:34:25.425655 | orchestrator | Friday 27 March 2026 00:33:58 +0000 (0:00:00.496) 0:07:58.451 ********** 2026-03-27 00:34:25.425702 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:25.425715 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:25.425727 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:25.425739 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:25.425751 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:25.425764 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:25.425776 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:25.425789 | orchestrator | 2026-03-27 00:34:25.425800 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-27 00:34:25.425811 | orchestrator | Friday 27 March 2026 00:34:00 +0000 (0:00:01.379) 0:07:59.831 ********** 2026-03-27 00:34:25.425821 | orchestrator | ok: [testbed-manager] 2026-03-27 00:34:25.425832 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:34:25.425843 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:34:25.425853 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:34:25.425863 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:34:25.425874 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:34:25.425884 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:34:25.425895 | orchestrator | 2026-03-27 00:34:25.425906 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-27 00:34:25.425916 | orchestrator | Friday 27 March 2026 00:34:01 +0000 (0:00:01.558) 0:08:01.390 ********** 2026-03-27 00:34:25.425927 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:34:25.425952 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:34:25.425963 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:34:25.425974 | orchestrator | skipping: [testbed-node-2] 
2026-03-27 00:34:25.425984 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:34:25.425995 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:34:25.426005 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:34:25.426069 | orchestrator | 2026-03-27 00:34:25.426081 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-27 00:34:25.426092 | orchestrator | Friday 27 March 2026 00:34:02 +0000 (0:00:00.464) 0:08:01.854 ********** 2026-03-27 00:34:25.426103 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:34:25.426116 | orchestrator | 2026-03-27 00:34:25.426127 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-27 00:34:25.426138 | orchestrator | Friday 27 March 2026 00:34:03 +0000 (0:00:00.799) 0:08:02.654 ********** 2026-03-27 00:34:25.426151 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:34:25.426165 | orchestrator | 2026-03-27 00:34:25.426175 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-27 00:34:25.426186 | orchestrator | Friday 27 March 2026 00:34:04 +0000 (0:00:00.927) 0:08:03.581 ********** 2026-03-27 00:34:25.426197 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:25.426207 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:25.426218 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:25.426228 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:25.426250 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:25.426261 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:25.426271 | 
orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:25.426282 | orchestrator | 2026-03-27 00:34:25.426314 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-27 00:34:25.426325 | orchestrator | Friday 27 March 2026 00:34:14 +0000 (0:00:10.145) 0:08:13.727 ********** 2026-03-27 00:34:25.426336 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:25.426346 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:25.426357 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:25.426367 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:25.426378 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:25.426388 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:25.426399 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:25.426409 | orchestrator | 2026-03-27 00:34:25.426420 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-27 00:34:25.426431 | orchestrator | Friday 27 March 2026 00:34:15 +0000 (0:00:00.814) 0:08:14.541 ********** 2026-03-27 00:34:25.426442 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:25.426452 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:25.426462 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:25.426473 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:25.426483 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:25.426494 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:25.426504 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:25.426515 | orchestrator | 2026-03-27 00:34:25.426525 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-27 00:34:25.426536 | orchestrator | Friday 27 March 2026 00:34:16 +0000 (0:00:01.367) 0:08:15.909 ********** 2026-03-27 00:34:25.426546 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:25.426557 | orchestrator | 
changed: [testbed-node-1] 2026-03-27 00:34:25.426567 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:25.426578 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:25.426588 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:25.426599 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:25.426609 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:25.426620 | orchestrator | 2026-03-27 00:34:25.426630 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-03-27 00:34:25.426641 | orchestrator | Friday 27 March 2026 00:34:18 +0000 (0:00:01.902) 0:08:17.812 ********** 2026-03-27 00:34:25.426651 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:25.426683 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:25.426697 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:25.426707 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:25.426718 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:25.426728 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:25.426738 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:25.426749 | orchestrator | 2026-03-27 00:34:25.426759 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-27 00:34:25.426770 | orchestrator | Friday 27 March 2026 00:34:19 +0000 (0:00:01.223) 0:08:19.035 ********** 2026-03-27 00:34:25.426780 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:25.426791 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:25.426801 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:25.426812 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:25.426822 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:25.426833 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:25.426843 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:25.426854 | orchestrator | 2026-03-27 
00:34:25.426864 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-27 00:34:25.426874 | orchestrator | 2026-03-27 00:34:25.426885 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-27 00:34:25.426896 | orchestrator | Friday 27 March 2026 00:34:20 +0000 (0:00:01.215) 0:08:20.250 ********** 2026-03-27 00:34:25.426914 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:34:25.426926 | orchestrator | 2026-03-27 00:34:25.426936 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-27 00:34:25.426947 | orchestrator | Friday 27 March 2026 00:34:21 +0000 (0:00:00.962) 0:08:21.213 ********** 2026-03-27 00:34:25.426957 | orchestrator | ok: [testbed-manager] 2026-03-27 00:34:25.426974 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:34:25.426985 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:34:25.426995 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:34:25.427006 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:34:25.427016 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:34:25.427027 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:34:25.427037 | orchestrator | 2026-03-27 00:34:25.427048 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-27 00:34:25.427058 | orchestrator | Friday 27 March 2026 00:34:22 +0000 (0:00:00.811) 0:08:22.024 ********** 2026-03-27 00:34:25.427069 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:25.427079 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:25.427089 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:25.427100 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:25.427111 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:25.427121 | 
orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:25.427131 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:25.427142 | orchestrator | 2026-03-27 00:34:25.427152 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-27 00:34:25.427163 | orchestrator | Friday 27 March 2026 00:34:23 +0000 (0:00:01.218) 0:08:23.242 ********** 2026-03-27 00:34:25.427174 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:34:25.427184 | orchestrator | 2026-03-27 00:34:25.427195 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-27 00:34:25.427205 | orchestrator | Friday 27 March 2026 00:34:24 +0000 (0:00:00.785) 0:08:24.027 ********** 2026-03-27 00:34:25.427216 | orchestrator | ok: [testbed-manager] 2026-03-27 00:34:25.427226 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:34:25.427237 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:34:25.427247 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:34:25.427258 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:34:25.427268 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:34:25.427278 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:34:25.427289 | orchestrator | 2026-03-27 00:34:25.427306 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-27 00:34:26.959548 | orchestrator | Friday 27 March 2026 00:34:25 +0000 (0:00:00.908) 0:08:24.936 ********** 2026-03-27 00:34:26.959652 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:26.959739 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:26.959754 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:26.959766 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:26.959776 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:26.959787 | 
orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:26.959798 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:26.959809 | orchestrator | 2026-03-27 00:34:26.959821 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:34:26.959833 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-27 00:34:26.959845 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-27 00:34:26.959856 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-27 00:34:26.959897 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-27 00:34:26.959909 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-27 00:34:26.959920 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-27 00:34:26.959931 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-27 00:34:26.959941 | orchestrator | 2026-03-27 00:34:26.959952 | orchestrator | 2026-03-27 00:34:26.959963 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:34:26.959974 | orchestrator | Friday 27 March 2026 00:34:26 +0000 (0:00:01.242) 0:08:26.178 ********** 2026-03-27 00:34:26.959985 | orchestrator | =============================================================================== 2026-03-27 00:34:26.959995 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.05s 2026-03-27 00:34:26.960006 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.79s 2026-03-27 00:34:26.960017 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 34.02s 2026-03-27 00:34:26.960027 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.62s 2026-03-27 00:34:26.960038 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.34s 2026-03-27 00:34:26.960048 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.52s 2026-03-27 00:34:26.960059 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.92s 2026-03-27 00:34:26.960071 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.67s 2026-03-27 00:34:26.960082 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.15s 2026-03-27 00:34:26.960094 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.10s 2026-03-27 00:34:26.960107 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.94s 2026-03-27 00:34:26.960135 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.25s 2026-03-27 00:34:26.960148 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.09s 2026-03-27 00:34:26.960161 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.02s 2026-03-27 00:34:26.960173 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.66s 2026-03-27 00:34:26.960185 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.54s 2026-03-27 00:34:26.960197 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.59s 2026-03-27 00:34:26.960209 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.06s 2026-03-27 00:34:26.960220 | orchestrator | 
osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.78s 2026-03-27 00:34:26.960233 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.62s 2026-03-27 00:34:27.148009 | orchestrator | + osism apply fail2ban 2026-03-27 00:34:38.733224 | orchestrator | 2026-03-27 00:34:38 | INFO  | Prepare task for execution of fail2ban. 2026-03-27 00:34:38.806761 | orchestrator | 2026-03-27 00:34:38 | INFO  | Task 96c37a5e-3008-4cdd-8e39-5c46a2120a7a (fail2ban) was prepared for execution. 2026-03-27 00:34:38.806880 | orchestrator | 2026-03-27 00:34:38 | INFO  | It takes a moment until task 96c37a5e-3008-4cdd-8e39-5c46a2120a7a (fail2ban) has been started and output is visible here. 2026-03-27 00:34:58.982425 | orchestrator | 2026-03-27 00:34:58.982536 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-27 00:34:58.982581 | orchestrator | 2026-03-27 00:34:58.982594 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-27 00:34:58.982653 | orchestrator | Friday 27 March 2026 00:34:41 +0000 (0:00:00.283) 0:00:00.283 ********** 2026-03-27 00:34:58.982670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:34:58.982684 | orchestrator | 2026-03-27 00:34:58.982695 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-27 00:34:58.982706 | orchestrator | Friday 27 March 2026 00:34:42 +0000 (0:00:01.000) 0:00:01.283 ********** 2026-03-27 00:34:58.982717 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:58.982730 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:58.982740 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:58.982751 | 
orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:58.982762 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:58.982773 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:58.982783 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:58.982794 | orchestrator | 2026-03-27 00:34:58.982805 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-27 00:34:58.982816 | orchestrator | Friday 27 March 2026 00:34:54 +0000 (0:00:11.382) 0:00:12.666 ********** 2026-03-27 00:34:58.982826 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:58.982837 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:58.982848 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:58.982858 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:58.982869 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:58.982880 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:58.982891 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:58.982901 | orchestrator | 2026-03-27 00:34:58.982912 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-27 00:34:58.982923 | orchestrator | Friday 27 March 2026 00:34:55 +0000 (0:00:01.587) 0:00:14.253 ********** 2026-03-27 00:34:58.982934 | orchestrator | ok: [testbed-manager] 2026-03-27 00:34:58.982946 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:34:58.982956 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:34:58.982969 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:34:58.982981 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:34:58.982993 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:34:58.983005 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:34:58.983017 | orchestrator | 2026-03-27 00:34:58.983030 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-27 00:34:58.983042 | orchestrator | Friday 27 March 
2026 00:34:57 +0000 (0:00:01.218) 0:00:15.472 ********** 2026-03-27 00:34:58.983054 | orchestrator | changed: [testbed-manager] 2026-03-27 00:34:58.983067 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:34:58.983080 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:34:58.983092 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:34:58.983104 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:34:58.983117 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:34:58.983130 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:34:58.983142 | orchestrator | 2026-03-27 00:34:58.983154 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:34:58.983167 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:34:58.983180 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:34:58.983193 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:34:58.983205 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:34:58.983248 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:34:58.983268 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:34:58.983286 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:34:58.983302 | orchestrator | 2026-03-27 00:34:58.983319 | orchestrator | 2026-03-27 00:34:58.983336 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:34:58.983353 | orchestrator | Friday 27 March 2026 00:34:58 +0000 (0:00:01.615) 0:00:17.087 ********** 2026-03-27 00:34:58.983371 | 
orchestrator | =============================================================================== 2026-03-27 00:34:58.983388 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.38s 2026-03-27 00:34:58.983405 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.62s 2026-03-27 00:34:58.983422 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.59s 2026-03-27 00:34:58.983440 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.22s 2026-03-27 00:34:58.983459 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.00s 2026-03-27 00:34:59.149600 | orchestrator | + osism apply network 2026-03-27 00:35:10.427215 | orchestrator | 2026-03-27 00:35:10 | INFO  | Prepare task for execution of network. 2026-03-27 00:35:10.500702 | orchestrator | 2026-03-27 00:35:10 | INFO  | Task 2b251ec1-7366-4622-8c45-fb26801f6760 (network) was prepared for execution. 2026-03-27 00:35:10.500809 | orchestrator | 2026-03-27 00:35:10 | INFO  | It takes a moment until task 2b251ec1-7366-4622-8c45-fb26801f6760 (network) has been started and output is visible here. 
2026-03-27 00:35:37.299480 | orchestrator | 2026-03-27 00:35:37.299742 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-27 00:35:37.299777 | orchestrator | 2026-03-27 00:35:37.299796 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-27 00:35:37.299815 | orchestrator | Friday 27 March 2026 00:35:13 +0000 (0:00:00.245) 0:00:00.245 ********** 2026-03-27 00:35:37.299834 | orchestrator | ok: [testbed-manager] 2026-03-27 00:35:37.299852 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:35:37.299870 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:35:37.299887 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:35:37.299906 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:35:37.299924 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:35:37.299942 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:35:37.299959 | orchestrator | 2026-03-27 00:35:37.299976 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-27 00:35:37.299995 | orchestrator | Friday 27 March 2026 00:35:14 +0000 (0:00:00.523) 0:00:00.769 ********** 2026-03-27 00:35:37.300018 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:35:37.300041 | orchestrator | 2026-03-27 00:35:37.300061 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-27 00:35:37.300083 | orchestrator | Friday 27 March 2026 00:35:15 +0000 (0:00:01.021) 0:00:01.790 ********** 2026-03-27 00:35:37.300103 | orchestrator | ok: [testbed-manager] 2026-03-27 00:35:37.300125 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:35:37.300143 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:35:37.300163 | 
orchestrator | ok: [testbed-node-2] 2026-03-27 00:35:37.300181 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:35:37.300201 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:35:37.300266 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:35:37.300285 | orchestrator | 2026-03-27 00:35:37.300303 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-27 00:35:37.300320 | orchestrator | Friday 27 March 2026 00:35:17 +0000 (0:00:02.622) 0:00:04.412 ********** 2026-03-27 00:35:37.300335 | orchestrator | ok: [testbed-manager] 2026-03-27 00:35:37.300350 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:35:37.300364 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:35:37.300379 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:35:37.300394 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:35:37.300408 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:35:37.300423 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:35:37.300439 | orchestrator | 2026-03-27 00:35:37.300455 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-27 00:35:37.300470 | orchestrator | Friday 27 March 2026 00:35:19 +0000 (0:00:01.628) 0:00:06.041 ********** 2026-03-27 00:35:37.300485 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-27 00:35:37.300500 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-27 00:35:37.300515 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-27 00:35:37.300530 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-27 00:35:37.300568 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-27 00:35:37.300581 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-27 00:35:37.300594 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-27 00:35:37.300607 | orchestrator | 2026-03-27 00:35:37.300620 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2026-03-27 00:35:37.300633 | orchestrator | Friday 27 March 2026 00:35:20 +0000 (0:00:01.172) 0:00:07.214 ********** 2026-03-27 00:35:37.300646 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-27 00:35:37.300663 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-27 00:35:37.300676 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-27 00:35:37.300689 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-27 00:35:37.300702 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-27 00:35:37.300715 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-27 00:35:37.300728 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-27 00:35:37.300742 | orchestrator | 2026-03-27 00:35:37.300755 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-27 00:35:37.300769 | orchestrator | Friday 27 March 2026 00:35:23 +0000 (0:00:03.408) 0:00:10.622 ********** 2026-03-27 00:35:37.300782 | orchestrator | changed: [testbed-manager] 2026-03-27 00:35:37.300796 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:35:37.300808 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:35:37.300821 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:35:37.300835 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:35:37.300848 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:35:37.300861 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:35:37.300874 | orchestrator | 2026-03-27 00:35:37.300888 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-27 00:35:37.300901 | orchestrator | Friday 27 March 2026 00:35:25 +0000 (0:00:01.609) 0:00:12.232 ********** 2026-03-27 00:35:37.300914 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-27 00:35:37.300927 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-27 00:35:37.300940 | orchestrator | ok: [testbed-node-2 
-> localhost] 2026-03-27 00:35:37.300953 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-27 00:35:37.300966 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-27 00:35:37.300979 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-27 00:35:37.300992 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-27 00:35:37.301005 | orchestrator | 2026-03-27 00:35:37.301019 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-27 00:35:37.301033 | orchestrator | Friday 27 March 2026 00:35:27 +0000 (0:00:01.921) 0:00:14.154 ********** 2026-03-27 00:35:37.301075 | orchestrator | ok: [testbed-manager] 2026-03-27 00:35:37.301089 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:35:37.301102 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:35:37.301114 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:35:37.301127 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:35:37.301140 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:35:37.301154 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:35:37.301166 | orchestrator | 2026-03-27 00:35:37.301180 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-27 00:35:37.301224 | orchestrator | Friday 27 March 2026 00:35:28 +0000 (0:00:00.907) 0:00:15.061 ********** 2026-03-27 00:35:37.301238 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:35:37.301251 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:35:37.301264 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:35:37.301277 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:35:37.301290 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:35:37.301327 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:35:37.301342 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:35:37.301355 | orchestrator | 2026-03-27 00:35:37.301368 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-03-27 00:35:37.301381 | orchestrator | Friday 27 March 2026 00:35:29 +0000 (0:00:00.739) 0:00:15.801 ********** 2026-03-27 00:35:37.301395 | orchestrator | ok: [testbed-manager] 2026-03-27 00:35:37.301408 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:35:37.301422 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:35:37.301435 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:35:37.301448 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:35:37.301461 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:35:37.301474 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:35:37.301488 | orchestrator | 2026-03-27 00:35:37.301501 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-27 00:35:37.301515 | orchestrator | Friday 27 March 2026 00:35:31 +0000 (0:00:02.217) 0:00:18.019 ********** 2026-03-27 00:35:37.301528 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:35:37.301541 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:35:37.301578 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:35:37.301591 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:35:37.301605 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:35:37.301618 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:35:37.301633 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-03-27 00:35:37.301648 | orchestrator | 2026-03-27 00:35:37.301661 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-27 00:35:37.301674 | orchestrator | Friday 27 March 2026 00:35:32 +0000 (0:00:00.807) 0:00:18.826 ********** 2026-03-27 00:35:37.301687 | orchestrator | ok: [testbed-manager] 2026-03-27 00:35:37.301700 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:35:37.301714 | orchestrator | changed: [testbed-node-1] 2026-03-27 
00:35:37.301727 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:35:37.301740 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:35:37.301754 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:35:37.301767 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:35:37.301780 | orchestrator | 2026-03-27 00:35:37.301792 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-27 00:35:37.301805 | orchestrator | Friday 27 March 2026 00:35:33 +0000 (0:00:01.363) 0:00:20.190 ********** 2026-03-27 00:35:37.301819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:35:37.301836 | orchestrator | 2026-03-27 00:35:37.301849 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-27 00:35:37.301862 | orchestrator | Friday 27 March 2026 00:35:34 +0000 (0:00:01.049) 0:00:21.240 ********** 2026-03-27 00:35:37.301888 | orchestrator | ok: [testbed-manager] 2026-03-27 00:35:37.301902 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:35:37.301915 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:35:37.301928 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:35:37.301941 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:35:37.301954 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:35:37.301967 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:35:37.301980 | orchestrator | 2026-03-27 00:35:37.301991 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-27 00:35:37.302002 | orchestrator | Friday 27 March 2026 00:35:35 +0000 (0:00:01.066) 0:00:22.306 ********** 2026-03-27 00:35:37.302013 | orchestrator | ok: [testbed-manager] 2026-03-27 00:35:37.302105 | orchestrator | ok: [testbed-node-0] 2026-03-27 
00:35:37.302125 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:35:37.302137 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:35:37.302148 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:35:37.302158 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:35:37.302169 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:35:37.302180 | orchestrator | 2026-03-27 00:35:37.302190 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-27 00:35:37.302201 | orchestrator | Friday 27 March 2026 00:35:36 +0000 (0:00:00.670) 0:00:22.976 ********** 2026-03-27 00:35:37.302213 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-27 00:35:37.302225 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-27 00:35:37.302237 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-27 00:35:37.302248 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-27 00:35:37.302259 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-27 00:35:37.302270 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-27 00:35:37.302281 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-27 00:35:37.302291 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-27 00:35:37.302301 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-27 00:35:37.302311 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-27 00:35:37.302322 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-27 00:35:37.302333 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-27 00:35:37.302344 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-27 00:35:37.302356 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-27 00:35:37.302367 | orchestrator | 2026-03-27 00:35:37.302390 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-27 00:35:51.041437 | orchestrator | Friday 27 March 2026 00:35:37 +0000 (0:00:00.938) 0:00:23.915 ********** 2026-03-27 00:35:51.041631 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:35:51.041658 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:35:51.041679 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:35:51.041697 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:35:51.041711 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:35:51.041720 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:35:51.041730 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:35:51.041740 | orchestrator | 2026-03-27 00:35:51.041752 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-27 00:35:51.041762 | orchestrator | Friday 27 March 2026 00:35:37 +0000 (0:00:00.694) 0:00:24.609 ********** 2026-03-27 00:35:51.041774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-5, testbed-node-4, testbed-node-3 2026-03-27 00:35:51.041816 | orchestrator | 2026-03-27 00:35:51.041827 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-27 00:35:51.041837 | orchestrator | Friday 27 March 2026 00:35:41 +0000 (0:00:03.772) 0:00:28.382 ********** 2026-03-27 00:35:51.041849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.041859 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-03-27 00:35:51.041871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.041882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.041892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.041901 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.041928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.041940 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-03-27 00:35:51.041959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-03-27 00:35:51.041970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-03-27 00:35:51.041982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-03-27 00:35:51.042012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-03-27 00:35:51.042082 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-03-27 00:35:51.042105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 
'addresses': ['192.168.128.11/20']}}) 2026-03-27 00:35:51.042116 | orchestrator | 2026-03-27 00:35:51.042127 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-27 00:35:51.042138 | orchestrator | Friday 27 March 2026 00:35:46 +0000 (0:00:04.764) 0:00:33.146 ********** 2026-03-27 00:35:51.042149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.042161 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-03-27 00:35:51.042172 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.042183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.042195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.042207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-03-27 00:35:51.042218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.042235 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-03-27 00:35:51.042247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-03-27 00:35:51.042258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-03-27 00:35:51.042270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-03-27 00:35:51.042281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-03-27 00:35:51.042327 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-03-27 00:36:02.933499 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-03-27 00:36:02.933676 | orchestrator | 2026-03-27 00:36:02.933693 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-27 00:36:02.933707 | orchestrator | Friday 27 March 2026 00:35:51 +0000 (0:00:05.076) 0:00:38.223 ********** 2026-03-27 00:36:02.933722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:36:02.933733 | orchestrator | 2026-03-27 00:36:02.933744 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-27 00:36:02.933756 | orchestrator | Friday 27 March 2026 00:35:52 +0000 (0:00:01.002) 0:00:39.225 ********** 2026-03-27 00:36:02.933767 | orchestrator | ok: [testbed-manager] 2026-03-27 00:36:02.933779 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:36:02.933790 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:36:02.933801 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:36:02.933812 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:36:02.933823 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:36:02.933833 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:36:02.933844 | orchestrator | 2026-03-27 00:36:02.933855 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-03-27 00:36:02.933866 | orchestrator | Friday 27 March 2026 00:35:53 +0000 (0:00:01.019) 0:00:40.244 ********** 2026-03-27 00:36:02.933877 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-27 00:36:02.933888 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-27 00:36:02.933899 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-27 00:36:02.933910 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-27 00:36:02.933920 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-27 00:36:02.933931 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-27 00:36:02.933942 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-27 00:36:02.933952 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-27 00:36:02.933963 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:36:02.933974 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-27 00:36:02.933985 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-27 00:36:02.933998 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-27 00:36:02.934101 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-27 00:36:02.934128 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:36:02.934149 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-27 00:36:02.934194 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2026-03-27 00:36:02.934215 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-27 00:36:02.934235 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-27 00:36:02.934289 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:36:02.934310 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-27 00:36:02.934331 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-27 00:36:02.934349 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-27 00:36:02.934368 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-27 00:36:02.934386 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:36:02.934405 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-27 00:36:02.934424 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-27 00:36:02.934443 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-27 00:36:02.934463 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-27 00:36:02.934482 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:36:02.934500 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:36:02.934572 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-27 00:36:02.934591 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-27 00:36:02.934610 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-27 00:36:02.934629 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-27 00:36:02.934647 | 
orchestrator | skipping: [testbed-node-5] 2026-03-27 00:36:02.934666 | orchestrator | 2026-03-27 00:36:02.934686 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-27 00:36:02.934731 | orchestrator | Friday 27 March 2026 00:35:54 +0000 (0:00:00.688) 0:00:40.932 ********** 2026-03-27 00:36:02.934753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:36:02.934773 | orchestrator | 2026-03-27 00:36:02.934793 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-27 00:36:02.934812 | orchestrator | Friday 27 March 2026 00:35:55 +0000 (0:00:01.081) 0:00:42.014 ********** 2026-03-27 00:36:02.934831 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:36:02.934849 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:36:02.934868 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:36:02.934887 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:36:02.934907 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:36:02.934925 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:36:02.934943 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:36:02.934963 | orchestrator | 2026-03-27 00:36:02.934981 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-03-27 00:36:02.935000 | orchestrator | Friday 27 March 2026 00:35:56 +0000 (0:00:00.644) 0:00:42.658 ********** 2026-03-27 00:36:02.935019 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:36:02.935038 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:36:02.935056 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:36:02.935075 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:36:02.935093 | 
orchestrator | skipping: [testbed-node-3] 2026-03-27 00:36:02.935112 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:36:02.935132 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:36:02.935151 | orchestrator | 2026-03-27 00:36:02.935169 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-27 00:36:02.935188 | orchestrator | Friday 27 March 2026 00:35:56 +0000 (0:00:00.539) 0:00:43.198 ********** 2026-03-27 00:36:02.935207 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:36:02.935225 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:36:02.935261 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:36:02.935280 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:36:02.935318 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:36:02.935337 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:36:02.935356 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:36:02.935375 | orchestrator | 2026-03-27 00:36:02.935394 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-27 00:36:02.935412 | orchestrator | Friday 27 March 2026 00:35:57 +0000 (0:00:00.620) 0:00:43.819 ********** 2026-03-27 00:36:02.935430 | orchestrator | ok: [testbed-manager] 2026-03-27 00:36:02.935449 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:36:02.935468 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:36:02.935486 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:36:02.935537 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:36:02.935551 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:36:02.935561 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:36:02.935572 | orchestrator | 2026-03-27 00:36:02.935583 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-03-27 00:36:02.935594 | orchestrator | Friday 27 March 2026 00:35:58 +0000 (0:00:01.596) 0:00:45.415 ********** 
2026-03-27 00:36:02.935604 | orchestrator | ok: [testbed-manager]
2026-03-27 00:36:02.935615 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:36:02.935625 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:36:02.935636 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:36:02.935646 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:36:02.935657 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:36:02.935667 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:36:02.935677 | orchestrator |
2026-03-27 00:36:02.935688 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-27 00:36:02.935707 | orchestrator | Friday 27 March 2026 00:35:59 +0000 (0:00:01.025) 0:00:46.441 **********
2026-03-27 00:36:02.935718 | orchestrator | ok: [testbed-manager]
2026-03-27 00:36:02.935729 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:36:02.935739 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:36:02.935749 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:36:02.935760 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:36:02.935770 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:36:02.935781 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:36:02.935791 | orchestrator |
2026-03-27 00:36:02.935809 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-27 00:36:02.935827 | orchestrator | Friday 27 March 2026 00:36:01 +0000 (0:00:01.999) 0:00:48.440 **********
2026-03-27 00:36:02.935846 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:36:02.935864 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:36:02.935883 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:36:02.935902 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:36:02.935921 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:36:02.935941 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:36:02.935959 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:36:02.935978 | orchestrator |
2026-03-27 00:36:02.935997 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-27 00:36:02.936014 | orchestrator | Friday 27 March 2026 00:36:02 +0000 (0:00:00.544) 0:00:48.984 **********
2026-03-27 00:36:02.936033 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:36:02.936053 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:36:02.936072 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:36:02.936090 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:36:02.936108 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:36:02.936127 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:36:02.936146 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:36:02.936164 | orchestrator |
2026-03-27 00:36:02.936183 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:36:02.936204 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-27 00:36:02.936238 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-27 00:36:02.936270 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-27 00:36:03.087917 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-27 00:36:03.088042 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-27 00:36:03.088057 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-27 00:36:03.088070 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-27 00:36:03.088081 | orchestrator |
2026-03-27 00:36:03.088093 | orchestrator |
2026-03-27 00:36:03.088105 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:36:03.088118 | orchestrator | Friday 27 March 2026 00:36:02 +0000 (0:00:00.564) 0:00:49.549 **********
2026-03-27 00:36:03.088129 | orchestrator | ===============================================================================
2026-03-27 00:36:03.088140 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.08s
2026-03-27 00:36:03.088151 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.76s
2026-03-27 00:36:03.088161 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.77s
2026-03-27 00:36:03.088172 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.41s
2026-03-27 00:36:03.088183 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.62s
2026-03-27 00:36:03.088194 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.22s
2026-03-27 00:36:03.088204 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.00s
2026-03-27 00:36:03.088215 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.92s
2026-03-27 00:36:03.088226 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.63s
2026-03-27 00:36:03.088237 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.61s
2026-03-27 00:36:03.088248 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.60s
2026-03-27 00:36:03.088258 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.36s
2026-03-27 00:36:03.088269 | orchestrator | osism.commons.network : Create required directories --------------------- 1.17s
2026-03-27 00:36:03.088280 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.08s
2026-03-27 00:36:03.088291 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.07s
2026-03-27 00:36:03.088301 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.05s
2026-03-27 00:36:03.088312 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.03s
2026-03-27 00:36:03.088323 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.02s
2026-03-27 00:36:03.088336 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.02s
2026-03-27 00:36:03.088354 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.00s
2026-03-27 00:36:03.197700 | orchestrator | + osism apply wireguard
2026-03-27 00:36:14.370592 | orchestrator | 2026-03-27 00:36:14 | INFO  | Prepare task for execution of wireguard.
2026-03-27 00:36:14.446306 | orchestrator | 2026-03-27 00:36:14 | INFO  | Task a3d45bed-b9da-4302-98e6-dfa9598f71c6 (wireguard) was prepared for execution.
2026-03-27 00:36:14.446428 | orchestrator | 2026-03-27 00:36:14 | INFO  | It takes a moment until task a3d45bed-b9da-4302-98e6-dfa9598f71c6 (wireguard) has been started and output is visible here.
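The skipped RUNNING HANDLER entries above illustrate Ansible's notify/handler mechanism: a handler only fires when a task that notifies it reports `changed`, and here every network task came back `ok` or unchanged. A minimal sketch of that pattern (task names and file paths are illustrative assumptions, not the actual osism.commons.network source):

```yaml
# Sketch only - names and paths are assumptions, not the real role.
- hosts: testbed-nodes
  tasks:
    - name: Create systemd networkd network files
      ansible.builtin.template:
        src: interface.network.j2
        dest: /etc/systemd/network/10-interface.network
      notify: Reload systemd-networkd   # handler fires only on "changed"

  handlers:
    - name: Reload systemd-networkd
      ansible.builtin.service:
        name: systemd-networkd
        state: reloaded
```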
2026-03-27 00:36:32.247275 | orchestrator |
2026-03-27 00:36:32.247381 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-27 00:36:32.247403 | orchestrator |
2026-03-27 00:36:32.247416 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-27 00:36:32.247427 | orchestrator | Friday 27 March 2026 00:36:17 +0000 (0:00:00.287) 0:00:00.287 **********
2026-03-27 00:36:32.247439 | orchestrator | ok: [testbed-manager]
2026-03-27 00:36:32.247451 | orchestrator |
2026-03-27 00:36:32.247518 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-27 00:36:32.247530 | orchestrator | Friday 27 March 2026 00:36:18 +0000 (0:00:01.492) 0:00:01.779 **********
2026-03-27 00:36:32.247541 | orchestrator | changed: [testbed-manager]
2026-03-27 00:36:32.247553 | orchestrator |
2026-03-27 00:36:32.247564 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-27 00:36:32.247575 | orchestrator | Friday 27 March 2026 00:36:24 +0000 (0:00:05.986) 0:00:07.766 **********
2026-03-27 00:36:32.247586 | orchestrator | changed: [testbed-manager]
2026-03-27 00:36:32.247596 | orchestrator |
2026-03-27 00:36:32.247607 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-27 00:36:32.247618 | orchestrator | Friday 27 March 2026 00:36:25 +0000 (0:00:00.522) 0:00:08.288 **********
2026-03-27 00:36:32.247629 | orchestrator | changed: [testbed-manager]
2026-03-27 00:36:32.247639 | orchestrator |
2026-03-27 00:36:32.247650 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-27 00:36:32.247661 | orchestrator | Friday 27 March 2026 00:36:25 +0000 (0:00:00.421) 0:00:08.710 **********
2026-03-27 00:36:32.247672 | orchestrator | ok: [testbed-manager]
2026-03-27 00:36:32.247682 | orchestrator |
2026-03-27 00:36:32.247693 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-27 00:36:32.247704 | orchestrator | Friday 27 March 2026 00:36:26 +0000 (0:00:00.536) 0:00:09.246 **********
2026-03-27 00:36:32.247715 | orchestrator | ok: [testbed-manager]
2026-03-27 00:36:32.247725 | orchestrator |
2026-03-27 00:36:32.247736 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-27 00:36:32.247747 | orchestrator | Friday 27 March 2026 00:36:26 +0000 (0:00:00.401) 0:00:09.648 **********
2026-03-27 00:36:32.247758 | orchestrator | ok: [testbed-manager]
2026-03-27 00:36:32.247768 | orchestrator |
2026-03-27 00:36:32.247779 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-27 00:36:32.247790 | orchestrator | Friday 27 March 2026 00:36:27 +0000 (0:00:00.403) 0:00:10.051 **********
2026-03-27 00:36:32.247800 | orchestrator | changed: [testbed-manager]
2026-03-27 00:36:32.247812 | orchestrator |
2026-03-27 00:36:32.247825 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-27 00:36:32.247838 | orchestrator | Friday 27 March 2026 00:36:28 +0000 (0:00:01.132) 0:00:11.184 **********
2026-03-27 00:36:32.247851 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-27 00:36:32.247871 | orchestrator | changed: [testbed-manager]
2026-03-27 00:36:32.247890 | orchestrator |
2026-03-27 00:36:32.247908 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-27 00:36:32.247926 | orchestrator | Friday 27 March 2026 00:36:29 +0000 (0:00:00.872) 0:00:12.057 **********
2026-03-27 00:36:32.247946 | orchestrator | changed: [testbed-manager]
2026-03-27 00:36:32.247967 | orchestrator |
2026-03-27 00:36:32.247986 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-27 00:36:32.248006 | orchestrator | Friday 27 March 2026 00:36:31 +0000 (0:00:01.918) 0:00:13.975 **********
2026-03-27 00:36:32.248019 | orchestrator | changed: [testbed-manager]
2026-03-27 00:36:32.248031 | orchestrator |
2026-03-27 00:36:32.248043 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:36:32.248056 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:36:32.248099 | orchestrator |
2026-03-27 00:36:32.248112 | orchestrator |
2026-03-27 00:36:32.248144 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:36:32.248168 | orchestrator | Friday 27 March 2026 00:36:32 +0000 (0:00:00.899) 0:00:14.875 **********
2026-03-27 00:36:32.248181 | orchestrator | ===============================================================================
2026-03-27 00:36:32.248193 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.99s
2026-03-27 00:36:32.248204 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.92s
2026-03-27 00:36:32.248215 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.49s
2026-03-27 00:36:32.248225 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.13s
2026-03-27 00:36:32.248236 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s
2026-03-27 00:36:32.248246 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.87s
2026-03-27 00:36:32.248257 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s
2026-03-27 00:36:32.248268 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.52s
2026-03-27 00:36:32.248297 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s
2026-03-27 00:36:32.248315 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s
2026-03-27 00:36:32.248326 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s
2026-03-27 00:36:32.435436 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-27 00:36:32.469199 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-27 00:36:32.469281 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-27 00:36:32.544538 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 201 0 --:--:-- --:--:-- --:--:-- 202
2026-03-27 00:36:32.558811 | orchestrator | + osism apply --environment custom workarounds
2026-03-27 00:36:33.772806 | orchestrator | 2026-03-27 00:36:33 | INFO  | Trying to run play workarounds in environment custom
2026-03-27 00:36:43.877201 | orchestrator | 2026-03-27 00:36:43 | INFO  | Prepare task for execution of workarounds.
2026-03-27 00:36:43.955157 | orchestrator | 2026-03-27 00:36:43 | INFO  | Task 8c24014c-41b6-46de-ac60-73c4dbafd920 (workarounds) was prepared for execution.
2026-03-27 00:36:43.955244 | orchestrator | 2026-03-27 00:36:43 | INFO  | It takes a moment until task 8c24014c-41b6-46de-ac60-73c4dbafd920 (workarounds) has been started and output is visible here.
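The wireguard key tasks above follow the standard wireguard-tools workflow (`wg genkey`, `wg pubkey`, `wg genpsk`): generate a server keypair and a preshared key, then read them back to template `wg0.conf` and the client configurations. A hedged sketch of how such tasks are commonly written; file locations are assumptions, not the osism.services.wireguard implementation:

```yaml
# Sketch only - not the actual osism.services.wireguard tasks; paths assumed.
- hosts: testbed-manager
  tasks:
    - name: Create public and private key - server
      ansible.builtin.shell: |
        umask 077
        wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
      args:
        creates: /etc/wireguard/server.key

    - name: Create preshared key
      ansible.builtin.shell: umask 077; wg genpsk > /etc/wireguard/psk
      args:
        creates: /etc/wireguard/psk
```

The `creates:` arguments keep the tasks idempotent, so re-running the role reports `ok` instead of regenerating keys.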
2026-03-27 00:37:07.873806 | orchestrator |
2026-03-27 00:37:07.873919 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 00:37:07.873938 | orchestrator |
2026-03-27 00:37:07.873950 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-27 00:37:07.873962 | orchestrator | Friday 27 March 2026 00:36:47 +0000 (0:00:00.175) 0:00:00.175 **********
2026-03-27 00:37:07.873973 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-27 00:37:07.873984 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-27 00:37:07.873995 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-27 00:37:07.874006 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-27 00:37:07.874064 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-27 00:37:07.874079 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-27 00:37:07.874090 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-27 00:37:07.874101 | orchestrator |
2026-03-27 00:37:07.874136 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-27 00:37:07.874148 | orchestrator |
2026-03-27 00:37:07.874159 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-27 00:37:07.874170 | orchestrator | Friday 27 March 2026 00:36:47 +0000 (0:00:00.611) 0:00:00.786 **********
2026-03-27 00:37:07.874181 | orchestrator | ok: [testbed-manager]
2026-03-27 00:37:07.874193 | orchestrator |
2026-03-27 00:37:07.874204 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-27 00:37:07.874214 | orchestrator |
2026-03-27 00:37:07.874225 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-27 00:37:07.874245 | orchestrator | Friday 27 March 2026 00:36:49 +0000 (0:00:02.336) 0:00:03.122 **********
2026-03-27 00:37:07.874265 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:37:07.874310 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:37:07.874328 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:37:07.874345 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:37:07.874359 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:37:07.874371 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:37:07.874383 | orchestrator |
2026-03-27 00:37:07.874396 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-27 00:37:07.874442 | orchestrator |
2026-03-27 00:37:07.874455 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-27 00:37:07.874468 | orchestrator | Friday 27 March 2026 00:36:52 +0000 (0:00:02.314) 0:00:05.437 **********
2026-03-27 00:37:07.874481 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-27 00:37:07.874497 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-27 00:37:07.874509 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-27 00:37:07.874521 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-27 00:37:07.874534 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-27 00:37:07.874547 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-27 00:37:07.874559 | orchestrator |
2026-03-27 00:37:07.874572 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-27 00:37:07.874584 | orchestrator | Friday 27 March 2026 00:36:53 +0000 (0:00:01.257) 0:00:06.694 **********
2026-03-27 00:37:07.874597 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:37:07.874609 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:37:07.874622 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:37:07.874633 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:37:07.874644 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:37:07.874655 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:37:07.874665 | orchestrator |
2026-03-27 00:37:07.874676 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-27 00:37:07.874687 | orchestrator | Friday 27 March 2026 00:36:57 +0000 (0:00:03.982) 0:00:10.677 **********
2026-03-27 00:37:07.874698 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:37:07.874708 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:37:07.874733 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:37:07.874744 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:37:07.874754 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:37:07.874765 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:37:07.874776 | orchestrator |
2026-03-27 00:37:07.874787 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-27 00:37:07.874797 | orchestrator |
2026-03-27 00:37:07.874808 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-27 00:37:07.874819 | orchestrator | Friday 27 March 2026 00:36:58 +0000 (0:00:00.488) 0:00:11.165 **********
2026-03-27 00:37:07.874841 | orchestrator | changed: [testbed-manager]
2026-03-27 00:37:07.874852 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:37:07.874862 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:37:07.874873 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:37:07.874884 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:37:07.874894 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:37:07.874905 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:37:07.874916 | orchestrator |
2026-03-27 00:37:07.874926 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-27 00:37:07.874937 | orchestrator | Friday 27 March 2026 00:36:59 +0000 (0:00:01.722) 0:00:12.888 **********
2026-03-27 00:37:07.874948 | orchestrator | changed: [testbed-manager]
2026-03-27 00:37:07.874959 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:37:07.874971 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:37:07.874989 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:37:07.875011 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:37:07.875039 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:37:07.875081 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:37:07.875100 | orchestrator |
2026-03-27 00:37:07.875118 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-27 00:37:07.875137 | orchestrator | Friday 27 March 2026 00:37:01 +0000 (0:00:01.485) 0:00:14.374 **********
2026-03-27 00:37:07.875156 | orchestrator | ok: [testbed-manager]
2026-03-27 00:37:07.875174 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:37:07.875190 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:37:07.875200 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:37:07.875211 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:37:07.875222 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:37:07.875233 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:37:07.875243 | orchestrator |
2026-03-27 00:37:07.875254 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-27 00:37:07.875265 | orchestrator | Friday 27 March 2026 00:37:02 +0000 (0:00:01.531) 0:00:15.905 **********
2026-03-27 00:37:07.875276 | orchestrator | changed: [testbed-manager]
2026-03-27 00:37:07.875286 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:37:07.875297 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:37:07.875307 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:37:07.875318 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:37:07.875329 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:37:07.875340 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:37:07.875350 | orchestrator |
2026-03-27 00:37:07.875361 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-27 00:37:07.875372 | orchestrator | Friday 27 March 2026 00:37:04 +0000 (0:00:01.635) 0:00:17.541 **********
2026-03-27 00:37:07.875383 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:37:07.875468 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:37:07.875498 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:37:07.875517 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:37:07.875535 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:37:07.875553 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:37:07.875572 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:37:07.875584 | orchestrator |
2026-03-27 00:37:07.875595 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-27 00:37:07.875606 | orchestrator |
2026-03-27 00:37:07.875617 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-27 00:37:07.875627 | orchestrator | Friday 27 March 2026 00:37:04 +0000 (0:00:00.614) 0:00:18.155 **********
2026-03-27 00:37:07.875638 | orchestrator | ok: [testbed-manager]
2026-03-27 00:37:07.875648 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:37:07.875659 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:37:07.875670 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:37:07.875680 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:37:07.875691 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:37:07.875701 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:37:07.875723 | orchestrator |
2026-03-27 00:37:07.875734 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:37:07.875747 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-27 00:37:07.875760 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:37:07.875771 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:37:07.875782 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:37:07.875793 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:37:07.875803 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:37:07.875814 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:37:07.875825 | orchestrator |
2026-03-27 00:37:07.875836 | orchestrator |
2026-03-27 00:37:07.875846 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:37:07.875867 | orchestrator | Friday 27 March 2026 00:37:07 +0000 (0:00:02.855) 0:00:21.011 **********
2026-03-27 00:37:07.875878 | orchestrator | ===============================================================================
2026-03-27 00:37:07.875889 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.98s
2026-03-27 00:37:07.875899 | orchestrator | Install python3-docker -------------------------------------------------- 2.86s
2026-03-27 00:37:07.875910 | orchestrator | Apply netplan configuration --------------------------------------------- 2.34s
2026-03-27 00:37:07.875921 | orchestrator | Apply netplan configuration --------------------------------------------- 2.31s
2026-03-27 00:37:07.875931 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s
2026-03-27 00:37:07.875942 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.64s
2026-03-27 00:37:07.875953 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.53s
2026-03-27 00:37:07.875963 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.49s
2026-03-27 00:37:07.875974 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.26s
2026-03-27 00:37:07.875984 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s
2026-03-27 00:37:07.875995 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.61s
2026-03-27 00:37:07.876016 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.49s
2026-03-27 00:37:08.170353 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-27 00:37:19.356293 | orchestrator | 2026-03-27 00:37:19 | INFO  | Prepare task for execution of reboot.
2026-03-27 00:37:19.424366 | orchestrator | 2026-03-27 00:37:19 | INFO  | Task e0718baf-14fc-48c2-8395-0cf0576e1967 (reboot) was prepared for execution.
2026-03-27 00:37:19.424515 | orchestrator | 2026-03-27 00:37:19 | INFO  | It takes a moment until task e0718baf-14fc-48c2-8395-0cf0576e1967 (reboot) has been started and output is visible here.
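The reboot play invoked above is gated on `ireallymeanit=yes`, a common safety pattern for destructive playbooks, and it deliberately does not wait for the hosts to come back (reachability is verified by a separate wait-for-connection run). A rough sketch of such a confirmation gate and fire-and-forget reboot; task names mirror the log, but the implementation details are assumptions:

```yaml
# Sketch only - the actual reboot playbook may differ.
- hosts: testbed-nodes
  serial: 1
  vars:
    ireallymeanit: "no"
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Re-run with -e ireallymeanit=yes to confirm the reboot"
      when: ireallymeanit != "yes"

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && shutdown -r now
      async: 1
      poll: 0
```

`async: 1` with `poll: 0` detaches the shutdown command, so the play finishes before SSH connectivity is lost.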
2026-03-27 00:37:30.642270 | orchestrator | 2026-03-27 00:37:30.642364 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-27 00:37:30.642433 | orchestrator | 2026-03-27 00:37:30.642446 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-27 00:37:30.642457 | orchestrator | Friday 27 March 2026 00:37:22 +0000 (0:00:00.237) 0:00:00.237 ********** 2026-03-27 00:37:30.642492 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:37:30.642504 | orchestrator | 2026-03-27 00:37:30.642515 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-27 00:37:30.642526 | orchestrator | Friday 27 March 2026 00:37:22 +0000 (0:00:00.160) 0:00:00.398 ********** 2026-03-27 00:37:30.642537 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:37:30.642547 | orchestrator | 2026-03-27 00:37:30.642558 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-27 00:37:30.642569 | orchestrator | Friday 27 March 2026 00:37:24 +0000 (0:00:01.245) 0:00:01.643 ********** 2026-03-27 00:37:30.642579 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:37:30.642590 | orchestrator | 2026-03-27 00:37:30.642601 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-27 00:37:30.642612 | orchestrator | 2026-03-27 00:37:30.642623 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-27 00:37:30.642633 | orchestrator | Friday 27 March 2026 00:37:24 +0000 (0:00:00.117) 0:00:01.760 ********** 2026-03-27 00:37:30.642644 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:37:30.642655 | orchestrator | 2026-03-27 00:37:30.642665 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-27 00:37:30.642676 | orchestrator | Friday 27 March 2026 
00:37:24 +0000 (0:00:00.085) 0:00:01.845 ********** 2026-03-27 00:37:30.642687 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:37:30.642697 | orchestrator | 2026-03-27 00:37:30.642708 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-27 00:37:30.642719 | orchestrator | Friday 27 March 2026 00:37:25 +0000 (0:00:01.043) 0:00:02.889 ********** 2026-03-27 00:37:30.642730 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:37:30.642741 | orchestrator | 2026-03-27 00:37:30.642752 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-27 00:37:30.642763 | orchestrator | 2026-03-27 00:37:30.642774 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-27 00:37:30.642784 | orchestrator | Friday 27 March 2026 00:37:25 +0000 (0:00:00.108) 0:00:02.998 ********** 2026-03-27 00:37:30.642795 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:37:30.642806 | orchestrator | 2026-03-27 00:37:30.642816 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-27 00:37:30.642829 | orchestrator | Friday 27 March 2026 00:37:25 +0000 (0:00:00.095) 0:00:03.094 ********** 2026-03-27 00:37:30.642842 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:37:30.642854 | orchestrator | 2026-03-27 00:37:30.642867 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-27 00:37:30.642879 | orchestrator | Friday 27 March 2026 00:37:26 +0000 (0:00:01.059) 0:00:04.153 ********** 2026-03-27 00:37:30.642892 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:37:30.642904 | orchestrator | 2026-03-27 00:37:30.642917 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-27 00:37:30.642929 | orchestrator | 2026-03-27 00:37:30.642941 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-03-27 00:37:30.642953 | orchestrator | Friday 27 March 2026 00:37:26 +0000 (0:00:00.104) 0:00:04.258 ********** 2026-03-27 00:37:30.642965 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:37:30.642977 | orchestrator | 2026-03-27 00:37:30.642989 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-27 00:37:30.643001 | orchestrator | Friday 27 March 2026 00:37:26 +0000 (0:00:00.087) 0:00:04.345 ********** 2026-03-27 00:37:30.643025 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:37:30.643038 | orchestrator | 2026-03-27 00:37:30.643050 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-27 00:37:30.643062 | orchestrator | Friday 27 March 2026 00:37:27 +0000 (0:00:01.022) 0:00:05.368 ********** 2026-03-27 00:37:30.643075 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:37:30.643087 | orchestrator | 2026-03-27 00:37:30.643100 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-27 00:37:30.643121 | orchestrator | 2026-03-27 00:37:30.643134 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-27 00:37:30.643147 | orchestrator | Friday 27 March 2026 00:37:27 +0000 (0:00:00.118) 0:00:05.486 ********** 2026-03-27 00:37:30.643160 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:37:30.643173 | orchestrator | 2026-03-27 00:37:30.643183 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-27 00:37:30.643194 | orchestrator | Friday 27 March 2026 00:37:27 +0000 (0:00:00.098) 0:00:05.585 ********** 2026-03-27 00:37:30.643205 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:37:30.643215 | orchestrator | 2026-03-27 00:37:30.643226 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-27 00:37:30.643236 | orchestrator | Friday 27 March 2026 00:37:29 +0000 (0:00:01.160) 0:00:06.746 ********** 2026-03-27 00:37:30.643247 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:37:30.643257 | orchestrator | 2026-03-27 00:37:30.643268 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-27 00:37:30.643278 | orchestrator | 2026-03-27 00:37:30.643289 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-27 00:37:30.643300 | orchestrator | Friday 27 March 2026 00:37:29 +0000 (0:00:00.106) 0:00:06.853 ********** 2026-03-27 00:37:30.643310 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:37:30.643321 | orchestrator | 2026-03-27 00:37:30.643331 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-27 00:37:30.643342 | orchestrator | Friday 27 March 2026 00:37:29 +0000 (0:00:00.102) 0:00:06.956 ********** 2026-03-27 00:37:30.643352 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:37:30.643363 | orchestrator | 2026-03-27 00:37:30.643391 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-27 00:37:30.643403 | orchestrator | Friday 27 March 2026 00:37:30 +0000 (0:00:01.037) 0:00:07.994 ********** 2026-03-27 00:37:30.643428 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:37:30.643440 | orchestrator | 2026-03-27 00:37:30.643451 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:37:30.643462 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:37:30.643474 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:37:30.643484 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-27 00:37:30.643495 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:37:30.643506 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:37:30.643516 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:37:30.643527 | orchestrator | 2026-03-27 00:37:30.643538 | orchestrator | 2026-03-27 00:37:30.643548 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:37:30.643559 | orchestrator | Friday 27 March 2026 00:37:30 +0000 (0:00:00.036) 0:00:08.030 ********** 2026-03-27 00:37:30.643570 | orchestrator | =============================================================================== 2026-03-27 00:37:30.643580 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.57s 2026-03-27 00:37:30.643591 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2026-03-27 00:37:30.643602 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s 2026-03-27 00:37:30.813710 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-27 00:37:42.192940 | orchestrator | 2026-03-27 00:37:42 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-27 00:37:42.267653 | orchestrator | 2026-03-27 00:37:42 | INFO  | Task c59e8ddd-0e55-42be-95d1-070698e340aa (wait-for-connection) was prepared for execution. 2026-03-27 00:37:42.267732 | orchestrator | 2026-03-27 00:37:42 | INFO  | It takes a moment until task c59e8ddd-0e55-42be-95d1-070698e340aa (wait-for-connection) has been started and output is visible here. 
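The phase above reboots each node without blocking ("do not wait for the reboot to complete") and then issues one separate `wait-for-connection` run once all reboots are in flight, which keeps the reboot plays fast and serializes the waiting into a single play. A hedged shell sketch of that sequence — the `wait-for-connection` invocation appears verbatim in the trace, while the reboot play name and the wrapper function itself are illustrative assumptions:

```shell
#!/usr/bin/env bash
set -e

# Illustrative wrapper around the two osism runs traced in this log.
# Assumption: the reboot play is invoked as "osism apply reboot"; only the
# wait-for-connection command below appears verbatim in the trace.
reboot_and_wait() {
    local limit=$1
    # Fire the reboots; the play skips "wait for the reboot to complete".
    osism apply reboot -l "$limit" -e ireallymeanit=yes
    # A separate play then blocks until every node answers over SSH again.
    osism apply wait-for-connection -l "$limit" -e ireallymeanit=yes
}
```

In the run above this is what makes all six `testbed-node-*` reboots overlap, with the single `wait-for-connection` play finishing in about 11 seconds once the nodes return.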
2026-03-27 00:37:56.984451 | orchestrator | 2026-03-27 00:37:56.984574 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-27 00:37:56.984595 | orchestrator | 2026-03-27 00:37:56.984612 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-27 00:37:56.984628 | orchestrator | Friday 27 March 2026 00:37:45 +0000 (0:00:00.274) 0:00:00.274 ********** 2026-03-27 00:37:56.984643 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:37:56.984659 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:37:56.984675 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:37:56.984691 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:37:56.984707 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:37:56.984722 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:37:56.984737 | orchestrator | 2026-03-27 00:37:56.984773 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:37:56.984791 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:37:56.984809 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:37:56.984826 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:37:56.984842 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:37:56.984856 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:37:56.984871 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:37:56.984887 | orchestrator | 2026-03-27 00:37:56.984902 | orchestrator | 2026-03-27 00:37:56.984918 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-27 00:37:56.984935 | orchestrator | Friday 27 March 2026 00:37:56 +0000 (0:00:11.473) 0:00:11.747 ********** 2026-03-27 00:37:56.984951 | orchestrator | =============================================================================== 2026-03-27 00:37:56.984968 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.47s 2026-03-27 00:37:57.140727 | orchestrator | + osism apply hddtemp 2026-03-27 00:38:08.435867 | orchestrator | 2026-03-27 00:38:08 | INFO  | Prepare task for execution of hddtemp. 2026-03-27 00:38:08.509826 | orchestrator | 2026-03-27 00:38:08 | INFO  | Task 736a0d55-c1f8-47e5-94d9-90c891901216 (hddtemp) was prepared for execution. 2026-03-27 00:38:08.509916 | orchestrator | 2026-03-27 00:38:08 | INFO  | It takes a moment until task 736a0d55-c1f8-47e5-94d9-90c891901216 (hddtemp) has been started and output is visible here. 2026-03-27 00:38:35.316622 | orchestrator | 2026-03-27 00:38:35.316744 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-27 00:38:35.316765 | orchestrator | 2026-03-27 00:38:35.316780 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-27 00:38:35.316795 | orchestrator | Friday 27 March 2026 00:38:11 +0000 (0:00:00.284) 0:00:00.284 ********** 2026-03-27 00:38:35.316809 | orchestrator | ok: [testbed-manager] 2026-03-27 00:38:35.316853 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:38:35.316867 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:38:35.316881 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:38:35.316895 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:38:35.316908 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:38:35.316923 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:38:35.316937 | orchestrator | 2026-03-27 00:38:35.316953 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-27 00:38:35.316968 | orchestrator | Friday 27 March 2026 00:38:12 +0000 (0:00:00.535) 0:00:00.820 ********** 2026-03-27 00:38:35.316985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:38:35.317001 | orchestrator | 2026-03-27 00:38:35.317015 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-27 00:38:35.317028 | orchestrator | Friday 27 March 2026 00:38:13 +0000 (0:00:00.875) 0:00:01.695 ********** 2026-03-27 00:38:35.317042 | orchestrator | ok: [testbed-manager] 2026-03-27 00:38:35.317056 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:38:35.317069 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:38:35.317083 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:38:35.317097 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:38:35.317111 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:38:35.317128 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:38:35.317141 | orchestrator | 2026-03-27 00:38:35.317157 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-27 00:38:35.317171 | orchestrator | Friday 27 March 2026 00:38:15 +0000 (0:00:02.416) 0:00:04.112 ********** 2026-03-27 00:38:35.317184 | orchestrator | changed: [testbed-manager] 2026-03-27 00:38:35.317199 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:38:35.317215 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:38:35.317230 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:38:35.317244 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:38:35.317257 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:38:35.317271 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:38:35.317315 | 
orchestrator | 2026-03-27 00:38:35.317332 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-27 00:38:35.317347 | orchestrator | Friday 27 March 2026 00:38:16 +0000 (0:00:00.882) 0:00:04.994 ********** 2026-03-27 00:38:35.317362 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:38:35.317375 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:38:35.317389 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:38:35.317403 | orchestrator | ok: [testbed-manager] 2026-03-27 00:38:35.317418 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:38:35.317432 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:38:35.317445 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:38:35.317458 | orchestrator | 2026-03-27 00:38:35.317472 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-27 00:38:35.317487 | orchestrator | Friday 27 March 2026 00:38:17 +0000 (0:00:01.227) 0:00:06.222 ********** 2026-03-27 00:38:35.317501 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:38:35.317513 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:38:35.317526 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:38:35.317539 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:38:35.317570 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:38:35.317586 | orchestrator | changed: [testbed-manager] 2026-03-27 00:38:35.317599 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:38:35.317613 | orchestrator | 2026-03-27 00:38:35.317627 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-27 00:38:35.317641 | orchestrator | Friday 27 March 2026 00:38:18 +0000 (0:00:00.545) 0:00:06.767 ********** 2026-03-27 00:38:35.317654 | orchestrator | changed: [testbed-manager] 2026-03-27 00:38:35.317667 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:38:35.317679 | orchestrator | changed: [testbed-node-0] 
2026-03-27 00:38:35.317708 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:38:35.317721 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:38:35.317734 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:38:35.317746 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:38:35.317759 | orchestrator | 2026-03-27 00:38:35.317773 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-27 00:38:35.317786 | orchestrator | Friday 27 March 2026 00:38:32 +0000 (0:00:13.886) 0:00:20.654 ********** 2026-03-27 00:38:35.317801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:38:35.317816 | orchestrator | 2026-03-27 00:38:35.317830 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-27 00:38:35.317843 | orchestrator | Friday 27 March 2026 00:38:33 +0000 (0:00:01.094) 0:00:21.749 ********** 2026-03-27 00:38:35.317856 | orchestrator | changed: [testbed-manager] 2026-03-27 00:38:35.317868 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:38:35.317880 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:38:35.317893 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:38:35.317906 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:38:35.317919 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:38:35.317932 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:38:35.317946 | orchestrator | 2026-03-27 00:38:35.317958 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:38:35.317972 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:38:35.318014 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:38:35.318110 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:38:35.318125 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:38:35.318138 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:38:35.318151 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:38:35.318165 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:38:35.318178 | orchestrator | 2026-03-27 00:38:35.318192 | orchestrator | 2026-03-27 00:38:35.318204 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:38:35.318218 | orchestrator | Friday 27 March 2026 00:38:35 +0000 (0:00:01.887) 0:00:23.636 ********** 2026-03-27 00:38:35.318231 | orchestrator | =============================================================================== 2026-03-27 00:38:35.318244 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.89s 2026-03-27 00:38:35.318258 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.42s 2026-03-27 00:38:35.318271 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.89s 2026-03-27 00:38:35.318306 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.23s 2026-03-27 00:38:35.318319 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.09s 2026-03-27 00:38:35.318332 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.88s 2026-03-27 00:38:35.318343 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 0.88s 2026-03-27 00:38:35.318380 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.55s 2026-03-27 00:38:35.318393 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.54s 2026-03-27 00:38:35.481821 | orchestrator | ++ semver latest 7.1.1 2026-03-27 00:38:35.524457 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-27 00:38:35.524527 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-27 00:38:35.524535 | orchestrator | + sudo systemctl restart manager.service 2026-03-27 00:38:48.861970 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-27 00:38:48.862163 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-27 00:38:48.862185 | orchestrator | + local max_attempts=60 2026-03-27 00:38:48.862199 | orchestrator | + local name=ceph-ansible 2026-03-27 00:38:48.862211 | orchestrator | + local attempt_num=1 2026-03-27 00:38:48.862223 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:38:48.885077 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:38:48.885166 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:38:48.885180 | orchestrator | + sleep 5 2026-03-27 00:38:53.887597 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:38:53.908168 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:38:53.908314 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:38:53.908365 | orchestrator | + sleep 5 2026-03-27 00:38:58.909117 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:38:58.948088 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:38:58.948178 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:38:58.948193 | orchestrator | + sleep 5 2026-03-27 00:39:03.952080 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:03.984341 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:03.984410 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:39:03.984416 | orchestrator | + sleep 5 2026-03-27 00:39:08.989120 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:09.022125 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:09.022187 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:39:09.022193 | orchestrator | + sleep 5 2026-03-27 00:39:14.026407 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:14.062515 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:14.062596 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:39:14.062610 | orchestrator | + sleep 5 2026-03-27 00:39:19.065799 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:19.096563 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:19.096652 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:39:19.096666 | orchestrator | + sleep 5 2026-03-27 00:39:24.102751 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:24.138218 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:24.138322 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:39:24.138335 | orchestrator | + sleep 5 2026-03-27 00:39:29.141064 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:29.176084 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:29.176160 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:39:29.176171 | orchestrator | + sleep 5 2026-03-27 00:39:34.181373 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:34.218379 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:34.218468 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:39:34.218484 | orchestrator | + sleep 5 2026-03-27 00:39:39.222638 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:39.261459 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:39.261546 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:39:39.261561 | orchestrator | + sleep 5 2026-03-27 00:39:44.265652 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:44.300698 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:44.300778 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:39:44.300792 | orchestrator | + sleep 5 2026-03-27 00:39:49.304662 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:49.340298 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:49.340387 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-27 00:39:49.340402 | orchestrator | + sleep 5 2026-03-27 00:39:54.345119 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-27 00:39:54.378394 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:54.378487 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-27 00:39:54.378502 | orchestrator | + local max_attempts=60 2026-03-27 00:39:54.378515 | orchestrator | + local name=kolla-ansible 2026-03-27 00:39:54.378528 | orchestrator | + local attempt_num=1 2026-03-27 00:39:54.378863 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-27 00:39:54.413423 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:54.413667 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-27 00:39:54.413695 | orchestrator | + local max_attempts=60 2026-03-27 00:39:54.413718 | orchestrator | + local name=osism-ansible 2026-03-27 00:39:54.413737 | orchestrator | + local attempt_num=1 2026-03-27 00:39:54.413773 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-27 00:39:54.441413 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-27 00:39:54.441505 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-27 00:39:54.441521 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-27 00:39:54.589364 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-27 00:39:54.725613 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-27 00:39:54.858345 | orchestrator | ARA in osism-ansible already disabled. 2026-03-27 00:39:54.976205 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-27 00:39:54.976720 | orchestrator | + osism apply gather-facts 2026-03-27 00:40:06.256831 | orchestrator | 2026-03-27 00:40:06 | INFO  | Prepare task for execution of gather-facts. 2026-03-27 00:40:06.328376 | orchestrator | 2026-03-27 00:40:06 | INFO  | Task bdba6839-d548-42af-8313-740846fe6bc6 (gather-facts) was prepared for execution. 2026-03-27 00:40:06.328513 | orchestrator | 2026-03-27 00:40:06 | INFO  | It takes a moment until task bdba6839-d548-42af-8313-740846fe6bc6 (gather-facts) has been started and output is visible here. 
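The `wait_for_container_healthy` calls traced above poll `docker inspect` every five seconds until the container's health status reads `healthy`. A minimal reconstruction of that helper, based only on the xtrace output; the real script in the testbed repository may differ (it invokes `/usr/bin/docker` by absolute path, and the optional interval argument here is a simplification):

```shell
#!/usr/bin/env bash
# Reconstructed sketch of the helper seen in the xtrace above: poll the
# container's health status until it reports "healthy", giving up after
# max_attempts checks.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local interval=${3:-5}   # assumption: the original hardcodes 'sleep 5'
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep "$interval"
    done
}
```

In the run above the `ceph-ansible` container cycled through `unhealthy` and then `starting` for roughly a minute after the manager service restart before reporting `healthy`, while `kolla-ansible` and `osism-ansible` passed on the first check.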
2026-03-27 00:40:18.419003 | orchestrator | 2026-03-27 00:40:18.419132 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-27 00:40:18.419265 | orchestrator | 2026-03-27 00:40:18.419287 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-27 00:40:18.419304 | orchestrator | Friday 27 March 2026 00:40:09 +0000 (0:00:00.257) 0:00:00.257 ********** 2026-03-27 00:40:18.419319 | orchestrator | ok: [testbed-manager] 2026-03-27 00:40:18.419334 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:40:18.419349 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:40:18.419364 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:40:18.419378 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:40:18.419392 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:40:18.419406 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:40:18.419420 | orchestrator | 2026-03-27 00:40:18.419433 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-27 00:40:18.419448 | orchestrator | 2026-03-27 00:40:18.419463 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-27 00:40:18.419477 | orchestrator | Friday 27 March 2026 00:40:17 +0000 (0:00:08.303) 0:00:08.560 ********** 2026-03-27 00:40:18.419492 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:40:18.419506 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:40:18.419521 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:40:18.419535 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:40:18.419549 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:40:18.419563 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:40:18.419577 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:40:18.419591 | orchestrator | 2026-03-27 00:40:18.419607 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-27 00:40:18.419623 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:40:18.419673 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:40:18.419684 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:40:18.419694 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:40:18.419704 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:40:18.419714 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:40:18.419723 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-27 00:40:18.419733 | orchestrator | 2026-03-27 00:40:18.419742 | orchestrator | 2026-03-27 00:40:18.419752 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:40:18.419761 | orchestrator | Friday 27 March 2026 00:40:18 +0000 (0:00:00.630) 0:00:09.191 ********** 2026-03-27 00:40:18.419771 | orchestrator | =============================================================================== 2026-03-27 00:40:18.419780 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.30s 2026-03-27 00:40:18.419790 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-03-27 00:40:18.577617 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-27 00:40:18.586393 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-27 
00:40:18.595412 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-27 00:40:18.613632 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-27 00:40:18.623333 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-27 00:40:18.637914 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-27 00:40:18.648833 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-27 00:40:18.663015 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-27 00:40:18.675357 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-27 00:40:18.691632 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-27 00:40:18.707363 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-27 00:40:18.724849 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-27 00:40:18.743038 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-27 00:40:18.764146 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-27 00:40:18.782323 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-27 00:40:18.799982 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-27 00:40:18.811417 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-27 00:40:18.827372 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-27 00:40:18.841191 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-27 00:40:18.858794 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-27 00:40:18.876648 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-27 00:40:18.889081 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-27 00:40:18.906592 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-27 00:40:18.925513 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-27 00:40:19.235060 | orchestrator | ok: Runtime: 0:23:31.039794 2026-03-27 00:40:19.338095 | 2026-03-27 00:40:19.338228 | TASK [Deploy services] 2026-03-27 00:40:19.871970 | orchestrator | skipping: Conditional result was False 2026-03-27 00:40:19.887590 | 2026-03-27 00:40:19.887748 | TASK [Deploy in a nutshell] 2026-03-27 00:40:20.655059 | orchestrator | + set -e 2026-03-27 00:40:20.656472 | orchestrator | 2026-03-27 00:40:20.656537 | orchestrator | # PULL IMAGES 2026-03-27 00:40:20.656607 | orchestrator | 2026-03-27 00:40:20.656630 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-27 00:40:20.656652 | orchestrator | ++ export INTERACTIVE=false 2026-03-27 00:40:20.656667 | orchestrator | ++ INTERACTIVE=false 2026-03-27 00:40:20.656711 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-27 00:40:20.656734 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-27 00:40:20.656749 | orchestrator | + source /opt/manager-vars.sh 2026-03-27 00:40:20.656760 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-27 00:40:20.656778 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-27 00:40:20.656790 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-27 00:40:20.656808 | orchestrator | ++ CEPH_VERSION=reef 2026-03-27 00:40:20.656819 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-27 00:40:20.656838 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-27 00:40:20.656848 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-27 00:40:20.656863 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-27 00:40:20.656874 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-27 00:40:20.656886 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-27 00:40:20.656897 | orchestrator | ++ export ARA=false 2026-03-27 00:40:20.656908 | orchestrator | ++ ARA=false 2026-03-27 00:40:20.656919 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-27 00:40:20.656929 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-27 00:40:20.656940 | orchestrator | ++ export TEMPEST=true 2026-03-27 00:40:20.656951 | orchestrator | ++ TEMPEST=true 2026-03-27 00:40:20.656961 | orchestrator | ++ export IS_ZUUL=true 2026-03-27 00:40:20.656972 | orchestrator | ++ IS_ZUUL=true 2026-03-27 00:40:20.656983 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154 2026-03-27 00:40:20.656994 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154 2026-03-27 00:40:20.657004 | orchestrator | ++ export EXTERNAL_API=false 2026-03-27 00:40:20.657015 | orchestrator | ++ EXTERNAL_API=false 2026-03-27 00:40:20.657026 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-27 00:40:20.657037 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-27 00:40:20.657048 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-27 00:40:20.657058 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-27 00:40:20.657069 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-27 00:40:20.657080 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-27 00:40:20.657090 | orchestrator | + echo 2026-03-27 00:40:20.657101 | orchestrator | + echo '# PULL IMAGES' 2026-03-27 00:40:20.657127 | orchestrator | + echo 2026-03-27 00:40:20.657185 | orchestrator | ++ semver latest 7.0.0 2026-03-27 00:40:20.712259 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-27 00:40:20.712345 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-27 00:40:20.712359 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-27 00:40:21.859101 | orchestrator | 2026-03-27 00:40:21 | INFO  | Trying to run play pull-images in environment custom 2026-03-27 00:40:31.897983 | orchestrator | 2026-03-27 00:40:31 | INFO  | Prepare task for execution of pull-images. 2026-03-27 00:40:31.959517 | orchestrator | 2026-03-27 00:40:31 | INFO  | Task 93e14a94-3a3a-4cf5-ab25-0edfffa9a98f (pull-images) was prepared for execution. 2026-03-27 00:40:31.959601 | orchestrator | 2026-03-27 00:40:31 | INFO  | Task 93e14a94-3a3a-4cf5-ab25-0edfffa9a98f is running in background. No more output. Check ARA for logs. 2026-03-27 00:40:33.242794 | orchestrator | 2026-03-27 00:40:33 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-27 00:40:43.346289 | orchestrator | 2026-03-27 00:40:43 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-27 00:40:43.415229 | orchestrator | 2026-03-27 00:40:43 | INFO  | Task ac4bc0f4-2f5a-4d6f-b4c9-2dbe4e97a961 (wipe-partitions) was prepared for execution. 2026-03-27 00:40:43.415316 | orchestrator | 2026-03-27 00:40:43 | INFO  | It takes a moment until task ac4bc0f4-2f5a-4d6f-b4c9-2dbe4e97a961 (wipe-partitions) has been started and output is visible here. 
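The wipe-partitions play whose output follows first checks device availability and then wipes the data disks (`/dev/sdb` through `/dev/sdd` on each storage node) with wipefs. The wipe step can be sketched as a small helper; this is an illustrative reconstruction, not the play's actual task code, and since `wipefs --all` is destructive the sketch only touches paths that are real block devices:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the per-device wipe performed by the play below;
# the availability check in the play is mirrored here by the block-device
# test before wiping.
wipe_devices() {
    local dev
    for dev in "$@"; do
        if [ -b "$dev" ]; then
            # Erase all filesystem, RAID, and partition-table signatures.
            wipefs --all "$dev"
        else
            echo "skipping ${dev}: not a block device" >&2
        fi
    done
}
```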
2026-03-27 00:40:55.151240 | orchestrator | 2026-03-27 00:40:55.151384 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-27 00:40:55.151404 | orchestrator | 2026-03-27 00:40:55.151416 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-27 00:40:55.151432 | orchestrator | Friday 27 March 2026 00:40:45 +0000 (0:00:00.122) 0:00:00.122 ********** 2026-03-27 00:40:55.151483 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:40:55.151497 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:40:55.151508 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:40:55.151519 | orchestrator | 2026-03-27 00:40:55.151530 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-27 00:40:55.151541 | orchestrator | Friday 27 March 2026 00:40:47 +0000 (0:00:01.831) 0:00:01.953 ********** 2026-03-27 00:40:55.151557 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:40:55.151568 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:40:55.151579 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:40:55.151589 | orchestrator | 2026-03-27 00:40:55.151600 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-27 00:40:55.151611 | orchestrator | Friday 27 March 2026 00:40:47 +0000 (0:00:00.217) 0:00:02.170 ********** 2026-03-27 00:40:55.151622 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:40:55.151633 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:40:55.151644 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:40:55.151654 | orchestrator | 2026-03-27 00:40:55.151665 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-27 00:40:55.151676 | orchestrator | Friday 27 March 2026 00:40:48 +0000 (0:00:00.549) 0:00:02.719 ********** 2026-03-27 00:40:55.151687 | orchestrator | skipping: 
[testbed-node-3] 2026-03-27 00:40:55.151699 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:40:55.151711 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:40:55.151725 | orchestrator | 2026-03-27 00:40:55.151737 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-27 00:40:55.151750 | orchestrator | Friday 27 March 2026 00:40:48 +0000 (0:00:00.227) 0:00:02.947 ********** 2026-03-27 00:40:55.151762 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-27 00:40:55.151778 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-27 00:40:55.151791 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-27 00:40:55.151803 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-27 00:40:55.151816 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-27 00:40:55.151828 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-27 00:40:55.151840 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-27 00:40:55.151853 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-27 00:40:55.151865 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-27 00:40:55.151877 | orchestrator | 2026-03-27 00:40:55.151890 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-27 00:40:55.151903 | orchestrator | Friday 27 March 2026 00:40:49 +0000 (0:00:01.311) 0:00:04.258 ********** 2026-03-27 00:40:55.151917 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-27 00:40:55.151930 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-27 00:40:55.151942 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-27 00:40:55.151954 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-27 00:40:55.151967 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-27 00:40:55.151978 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-27 00:40:55.151991 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-27 00:40:55.152003 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-27 00:40:55.152015 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-27 00:40:55.152027 | orchestrator | 2026-03-27 00:40:55.152039 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-27 00:40:55.152051 | orchestrator | Friday 27 March 2026 00:40:51 +0000 (0:00:01.421) 0:00:05.680 ********** 2026-03-27 00:40:55.152062 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-27 00:40:55.152073 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-27 00:40:55.152083 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-27 00:40:55.152100 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-27 00:40:55.152144 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-27 00:40:55.152157 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-27 00:40:55.152167 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-27 00:40:55.152193 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-27 00:40:55.152216 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-27 00:40:55.152227 | orchestrator | 2026-03-27 00:40:55.152238 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-27 00:40:55.152249 | orchestrator | Friday 27 March 2026 00:40:53 +0000 (0:00:02.198) 0:00:07.878 ********** 2026-03-27 00:40:55.152260 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:40:55.152271 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:40:55.152281 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:40:55.152292 | orchestrator | 2026-03-27 00:40:55.152303 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-27 00:40:55.152323 | orchestrator | Friday 27 March 2026 00:40:54 +0000 (0:00:00.622) 0:00:08.500 ********** 2026-03-27 00:40:55.152334 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:40:55.152345 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:40:55.152355 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:40:55.152367 | orchestrator | 2026-03-27 00:40:55.152378 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:40:55.152390 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:40:55.152401 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:40:55.152431 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:40:55.152442 | orchestrator | 2026-03-27 00:40:55.152453 | orchestrator | 2026-03-27 00:40:55.152464 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:40:55.152474 | orchestrator | Friday 27 March 2026 00:40:54 +0000 (0:00:00.784) 0:00:09.285 ********** 2026-03-27 00:40:55.152485 | orchestrator | =============================================================================== 2026-03-27 00:40:55.152496 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.20s 2026-03-27 00:40:55.152506 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.83s 2026-03-27 00:40:55.152517 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.42s 2026-03-27 00:40:55.152527 | orchestrator | Check device availability ----------------------------------------------- 1.31s 2026-03-27 00:40:55.152538 | orchestrator | Request device events from the kernel 
----------------------------------- 0.78s 2026-03-27 00:40:55.152549 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2026-03-27 00:40:55.152559 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s 2026-03-27 00:40:55.152570 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2026-03-27 00:40:55.152581 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s 2026-03-27 00:41:06.596720 | orchestrator | 2026-03-27 00:41:06 | INFO  | Prepare task for execution of facts. 2026-03-27 00:41:06.661774 | orchestrator | 2026-03-27 00:41:06 | INFO  | Task e814e401-27e0-4688-8834-07b3f7970505 (facts) was prepared for execution. 2026-03-27 00:41:06.661888 | orchestrator | 2026-03-27 00:41:06 | INFO  | It takes a moment until task e814e401-27e0-4688-8834-07b3f7970505 (facts) has been started and output is visible here. 2026-03-27 00:41:17.453809 | orchestrator | 2026-03-27 00:41:17.453932 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-27 00:41:17.453950 | orchestrator | 2026-03-27 00:41:17.453982 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-27 00:41:17.453994 | orchestrator | Friday 27 March 2026 00:41:09 +0000 (0:00:00.304) 0:00:00.304 ********** 2026-03-27 00:41:17.454006 | orchestrator | ok: [testbed-manager] 2026-03-27 00:41:17.454086 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:41:17.454124 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:41:17.454134 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:41:17.454145 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:41:17.454155 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:41:17.454165 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:41:17.454175 | orchestrator | 2026-03-27 00:41:17.454197 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-03-27 00:41:17.454208 | orchestrator | Friday 27 March 2026 00:41:10 +0000 (0:00:01.302) 0:00:01.607 ********** 2026-03-27 00:41:17.454218 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:41:17.454229 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:41:17.454238 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:41:17.454248 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:41:17.454258 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:17.454268 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:17.454278 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:41:17.454288 | orchestrator | 2026-03-27 00:41:17.454298 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-27 00:41:17.454307 | orchestrator | 2026-03-27 00:41:17.454318 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-27 00:41:17.454329 | orchestrator | Friday 27 March 2026 00:41:11 +0000 (0:00:01.034) 0:00:02.642 ********** 2026-03-27 00:41:17.454339 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:41:17.454350 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:41:17.454361 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:41:17.454371 | orchestrator | ok: [testbed-manager] 2026-03-27 00:41:17.454382 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:41:17.454392 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:41:17.454402 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:41:17.454413 | orchestrator | 2026-03-27 00:41:17.454423 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-27 00:41:17.454435 | orchestrator | 2026-03-27 00:41:17.454446 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-27 00:41:17.454456 | orchestrator | Friday 27 March 
2026 00:41:16 +0000 (0:00:04.832) 0:00:07.474 ********** 2026-03-27 00:41:17.454463 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:41:17.454470 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:41:17.454477 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:41:17.454484 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:41:17.454491 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:17.454499 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:17.454506 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:41:17.454513 | orchestrator | 2026-03-27 00:41:17.454523 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:41:17.454533 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:41:17.454545 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:41:17.454555 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:41:17.454565 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:41:17.454577 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:41:17.454598 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:41:17.454606 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:41:17.454613 | orchestrator | 2026-03-27 00:41:17.454620 | orchestrator | 2026-03-27 00:41:17.454627 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:41:17.454635 | orchestrator | Friday 27 March 2026 00:41:17 +0000 (0:00:00.454) 0:00:07.929 ********** 2026-03-27 00:41:17.454642 
| orchestrator | =============================================================================== 2026-03-27 00:41:17.454649 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.83s 2026-03-27 00:41:17.454656 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s 2026-03-27 00:41:17.454663 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.03s 2026-03-27 00:41:17.454670 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-03-27 00:41:18.879505 | orchestrator | 2026-03-27 00:41:18 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-27 00:41:18.933616 | orchestrator | 2026-03-27 00:41:18 | INFO  | Task 5d3e020a-54dd-41fd-a958-725aeaeb5fa6 (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-27 00:41:18.933718 | orchestrator | 2026-03-27 00:41:18 | INFO  | It takes a moment until task 5d3e020a-54dd-41fd-a958-725aeaeb5fa6 (ceph-configure-lvm-volumes) has been started and output is visible here. 
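Before the Ceph LVM configuration starts, the wipe-partitions play above cleared the OSD disks (`/dev/sdb`–`/dev/sdd`) in three steps per device: wipefs, zeroing the first 32M, then a udev reload/trigger. A minimal sketch of those steps, run here against a throwaway image file rather than a real disk, since the commands are destructive:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Scratch image file standing in for /dev/sdb..sdd (which would be destroyed).
DEV=$(mktemp)
truncate -s 64M "$DEV"

# 1. Drop all known filesystem / RAID / partition-table signatures.
wipefs --all "$DEV"

# 2. Zero the first 32 MiB to clear metadata wipefs does not recognize
#    (e.g. LVM headers, Ceph bluestore labels).
dd if=/dev/zero of="$DEV" bs=1M count=32 conv=fsync status=none

# 3. On a real host the kernel/udev must then re-scan the device, roughly:
#      udevadm control --reload-rules && udevadm trigger
#    (skipped here: no real block device is involved)

# Verify: the first 32 MiB are now all zero bytes.
cmp -s <(head -c 32M "$DEV") <(head -c 32M /dev/zero) && echo "wiped"
rm -f "$DEV"
```

The play recap above matches this shape: wipefs reports `ok` (nothing left to change on re-runs), while the dd zeroing and the udev reload/trigger tasks report `changed` every run.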
2026-03-27 00:41:29.639192 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-27 00:41:29.639287 | orchestrator | 2.16.14 2026-03-27 00:41:29.639301 | orchestrator | 2026-03-27 00:41:29.639323 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-27 00:41:29.639335 | orchestrator | 2026-03-27 00:41:29.639347 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-27 00:41:29.639358 | orchestrator | Friday 27 March 2026 00:41:23 +0000 (0:00:00.262) 0:00:00.262 ********** 2026-03-27 00:41:29.639369 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-27 00:41:29.639380 | orchestrator | 2026-03-27 00:41:29.639390 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-27 00:41:29.639401 | orchestrator | Friday 27 March 2026 00:41:23 +0000 (0:00:00.200) 0:00:00.463 ********** 2026-03-27 00:41:29.639412 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:41:29.639423 | orchestrator | 2026-03-27 00:41:29.639434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.639445 | orchestrator | Friday 27 March 2026 00:41:23 +0000 (0:00:00.218) 0:00:00.681 ********** 2026-03-27 00:41:29.639455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-27 00:41:29.639466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-27 00:41:29.639477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-27 00:41:29.639487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-27 00:41:29.639498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-27 
00:41:29.639508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-27 00:41:29.639519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-27 00:41:29.639530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-27 00:41:29.639540 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-27 00:41:29.639551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-27 00:41:29.639582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-27 00:41:29.639594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-27 00:41:29.639604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-27 00:41:29.639615 | orchestrator | 2026-03-27 00:41:29.639625 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.639636 | orchestrator | Friday 27 March 2026 00:41:23 +0000 (0:00:00.355) 0:00:01.037 ********** 2026-03-27 00:41:29.639646 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.639657 | orchestrator | 2026-03-27 00:41:29.639668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.639679 | orchestrator | Friday 27 March 2026 00:41:24 +0000 (0:00:00.406) 0:00:01.443 ********** 2026-03-27 00:41:29.639691 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.639703 | orchestrator | 2026-03-27 00:41:29.639716 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.639733 | orchestrator | Friday 27 March 2026 00:41:24 +0000 (0:00:00.196) 0:00:01.640 ********** 2026-03-27 
00:41:29.639745 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.639758 | orchestrator | 2026-03-27 00:41:29.639770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.639782 | orchestrator | Friday 27 March 2026 00:41:24 +0000 (0:00:00.167) 0:00:01.807 ********** 2026-03-27 00:41:29.639795 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.639807 | orchestrator | 2026-03-27 00:41:29.639819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.639831 | orchestrator | Friday 27 March 2026 00:41:24 +0000 (0:00:00.170) 0:00:01.978 ********** 2026-03-27 00:41:29.639844 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.639855 | orchestrator | 2026-03-27 00:41:29.639867 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.639880 | orchestrator | Friday 27 March 2026 00:41:24 +0000 (0:00:00.151) 0:00:02.130 ********** 2026-03-27 00:41:29.639892 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.639905 | orchestrator | 2026-03-27 00:41:29.639917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.639930 | orchestrator | Friday 27 March 2026 00:41:25 +0000 (0:00:00.178) 0:00:02.309 ********** 2026-03-27 00:41:29.639942 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.639954 | orchestrator | 2026-03-27 00:41:29.639966 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.639980 | orchestrator | Friday 27 March 2026 00:41:25 +0000 (0:00:00.178) 0:00:02.488 ********** 2026-03-27 00:41:29.639993 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.640005 | orchestrator | 2026-03-27 00:41:29.640017 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-27 00:41:29.640029 | orchestrator | Friday 27 March 2026 00:41:25 +0000 (0:00:00.178) 0:00:02.666 ********** 2026-03-27 00:41:29.640042 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526) 2026-03-27 00:41:29.640055 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526) 2026-03-27 00:41:29.640068 | orchestrator | 2026-03-27 00:41:29.640079 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.640148 | orchestrator | Friday 27 March 2026 00:41:25 +0000 (0:00:00.377) 0:00:03.044 ********** 2026-03-27 00:41:29.640160 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_62ab2900-9bbe-4288-89a4-62dba7ae92ab) 2026-03-27 00:41:29.640172 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_62ab2900-9bbe-4288-89a4-62dba7ae92ab) 2026-03-27 00:41:29.640182 | orchestrator | 2026-03-27 00:41:29.640193 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.640212 | orchestrator | Friday 27 March 2026 00:41:26 +0000 (0:00:00.366) 0:00:03.410 ********** 2026-03-27 00:41:29.640223 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0ff86b74-b83b-4d7e-b564-01c0b90f308d) 2026-03-27 00:41:29.640234 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0ff86b74-b83b-4d7e-b564-01c0b90f308d) 2026-03-27 00:41:29.640244 | orchestrator | 2026-03-27 00:41:29.640255 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.640266 | orchestrator | Friday 27 March 2026 00:41:26 +0000 (0:00:00.558) 0:00:03.969 ********** 2026-03-27 00:41:29.640276 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_52ce1f02-342d-40b1-ab4b-d26aefe85f26) 2026-03-27 00:41:29.640287 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_52ce1f02-342d-40b1-ab4b-d26aefe85f26) 2026-03-27 00:41:29.640298 | orchestrator | 2026-03-27 00:41:29.640308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:29.640319 | orchestrator | Friday 27 March 2026 00:41:27 +0000 (0:00:00.557) 0:00:04.526 ********** 2026-03-27 00:41:29.640329 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-27 00:41:29.640340 | orchestrator | 2026-03-27 00:41:29.640350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:29.640361 | orchestrator | Friday 27 March 2026 00:41:27 +0000 (0:00:00.680) 0:00:05.207 ********** 2026-03-27 00:41:29.640378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-27 00:41:29.640389 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-27 00:41:29.640399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-27 00:41:29.640410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-27 00:41:29.640420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-27 00:41:29.640431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-27 00:41:29.640441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-27 00:41:29.640452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-27 00:41:29.640463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-27 00:41:29.640473 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-27 00:41:29.640484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-27 00:41:29.640494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-27 00:41:29.640505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-27 00:41:29.640515 | orchestrator | 2026-03-27 00:41:29.640526 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:29.640536 | orchestrator | Friday 27 March 2026 00:41:28 +0000 (0:00:00.389) 0:00:05.597 ********** 2026-03-27 00:41:29.640547 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.640557 | orchestrator | 2026-03-27 00:41:29.640568 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:29.640579 | orchestrator | Friday 27 March 2026 00:41:28 +0000 (0:00:00.181) 0:00:05.778 ********** 2026-03-27 00:41:29.640589 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.640600 | orchestrator | 2026-03-27 00:41:29.640610 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:29.640621 | orchestrator | Friday 27 March 2026 00:41:28 +0000 (0:00:00.180) 0:00:05.959 ********** 2026-03-27 00:41:29.640632 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.640649 | orchestrator | 2026-03-27 00:41:29.640660 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:29.640686 | orchestrator | Friday 27 March 2026 00:41:28 +0000 (0:00:00.185) 0:00:06.144 ********** 2026-03-27 00:41:29.640707 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.640717 | orchestrator | 2026-03-27 00:41:29.640728 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-27 00:41:29.640738 | orchestrator | Friday 27 March 2026 00:41:29 +0000 (0:00:00.182) 0:00:06.327 ********** 2026-03-27 00:41:29.640749 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.640760 | orchestrator | 2026-03-27 00:41:29.640775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:29.640786 | orchestrator | Friday 27 March 2026 00:41:29 +0000 (0:00:00.177) 0:00:06.505 ********** 2026-03-27 00:41:29.640796 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.640807 | orchestrator | 2026-03-27 00:41:29.640817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:29.640828 | orchestrator | Friday 27 March 2026 00:41:29 +0000 (0:00:00.172) 0:00:06.677 ********** 2026-03-27 00:41:29.640839 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:29.640850 | orchestrator | 2026-03-27 00:41:29.640866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:36.709288 | orchestrator | Friday 27 March 2026 00:41:29 +0000 (0:00:00.175) 0:00:06.853 ********** 2026-03-27 00:41:36.709393 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.709409 | orchestrator | 2026-03-27 00:41:36.709422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:36.709433 | orchestrator | Friday 27 March 2026 00:41:29 +0000 (0:00:00.167) 0:00:07.020 ********** 2026-03-27 00:41:36.709444 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-27 00:41:36.709456 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-27 00:41:36.709467 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-27 00:41:36.709478 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-27 00:41:36.709489 | orchestrator | 2026-03-27 
00:41:36.709500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:36.709511 | orchestrator | Friday 27 March 2026 00:41:30 +0000 (0:00:00.818) 0:00:07.839 ********** 2026-03-27 00:41:36.709522 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.709533 | orchestrator | 2026-03-27 00:41:36.709544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:36.709555 | orchestrator | Friday 27 March 2026 00:41:30 +0000 (0:00:00.188) 0:00:08.027 ********** 2026-03-27 00:41:36.709565 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.709576 | orchestrator | 2026-03-27 00:41:36.709587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:36.709598 | orchestrator | Friday 27 March 2026 00:41:30 +0000 (0:00:00.190) 0:00:08.217 ********** 2026-03-27 00:41:36.709609 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.709620 | orchestrator | 2026-03-27 00:41:36.709658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:36.709669 | orchestrator | Friday 27 March 2026 00:41:31 +0000 (0:00:00.213) 0:00:08.431 ********** 2026-03-27 00:41:36.709680 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.709691 | orchestrator | 2026-03-27 00:41:36.709701 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-27 00:41:36.709712 | orchestrator | Friday 27 March 2026 00:41:31 +0000 (0:00:00.198) 0:00:08.629 ********** 2026-03-27 00:41:36.709724 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-27 00:41:36.709734 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-27 00:41:36.709745 | orchestrator | 2026-03-27 00:41:36.709756 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-27 00:41:36.709767 | orchestrator | Friday 27 March 2026 00:41:31 +0000 (0:00:00.178) 0:00:08.808 ********** 2026-03-27 00:41:36.709799 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.709810 | orchestrator | 2026-03-27 00:41:36.709823 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-27 00:41:36.709836 | orchestrator | Friday 27 March 2026 00:41:31 +0000 (0:00:00.137) 0:00:08.945 ********** 2026-03-27 00:41:36.709847 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.709859 | orchestrator | 2026-03-27 00:41:36.709873 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-27 00:41:36.709886 | orchestrator | Friday 27 March 2026 00:41:31 +0000 (0:00:00.131) 0:00:09.077 ********** 2026-03-27 00:41:36.709898 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.709910 | orchestrator | 2026-03-27 00:41:36.709923 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-27 00:41:36.709935 | orchestrator | Friday 27 March 2026 00:41:31 +0000 (0:00:00.120) 0:00:09.197 ********** 2026-03-27 00:41:36.709947 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:41:36.709959 | orchestrator | 2026-03-27 00:41:36.709971 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-27 00:41:36.709983 | orchestrator | Friday 27 March 2026 00:41:32 +0000 (0:00:00.136) 0:00:09.334 ********** 2026-03-27 00:41:36.709996 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '49c52ee7-6668-5cd2-bd86-f7267953750e'}}) 2026-03-27 00:41:36.710010 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2cf1a901-b2f7-5490-8423-90f944953f5f'}}) 2026-03-27 00:41:36.710104 | orchestrator | 2026-03-27 00:41:36.710120 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-27 00:41:36.710132 | orchestrator | Friday 27 March 2026 00:41:32 +0000 (0:00:00.178) 0:00:09.513 ********** 2026-03-27 00:41:36.710145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '49c52ee7-6668-5cd2-bd86-f7267953750e'}})  2026-03-27 00:41:36.710171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2cf1a901-b2f7-5490-8423-90f944953f5f'}})  2026-03-27 00:41:36.710184 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.710194 | orchestrator | 2026-03-27 00:41:36.710237 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-27 00:41:36.710250 | orchestrator | Friday 27 March 2026 00:41:32 +0000 (0:00:00.147) 0:00:09.660 ********** 2026-03-27 00:41:36.710261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '49c52ee7-6668-5cd2-bd86-f7267953750e'}})  2026-03-27 00:41:36.710272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2cf1a901-b2f7-5490-8423-90f944953f5f'}})  2026-03-27 00:41:36.710282 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.710293 | orchestrator | 2026-03-27 00:41:36.710304 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-27 00:41:36.710314 | orchestrator | Friday 27 March 2026 00:41:32 +0000 (0:00:00.357) 0:00:10.017 ********** 2026-03-27 00:41:36.710325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '49c52ee7-6668-5cd2-bd86-f7267953750e'}})  2026-03-27 00:41:36.710355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2cf1a901-b2f7-5490-8423-90f944953f5f'}})  2026-03-27 00:41:36.710367 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.710377 | 
orchestrator | 2026-03-27 00:41:36.710388 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-27 00:41:36.710399 | orchestrator | Friday 27 March 2026 00:41:32 +0000 (0:00:00.130) 0:00:10.148 ********** 2026-03-27 00:41:36.710409 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:41:36.710420 | orchestrator | 2026-03-27 00:41:36.710431 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-27 00:41:36.710442 | orchestrator | Friday 27 March 2026 00:41:33 +0000 (0:00:00.124) 0:00:10.272 ********** 2026-03-27 00:41:36.710453 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:41:36.710473 | orchestrator | 2026-03-27 00:41:36.710484 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-27 00:41:36.710494 | orchestrator | Friday 27 March 2026 00:41:33 +0000 (0:00:00.129) 0:00:10.402 ********** 2026-03-27 00:41:36.710505 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.710516 | orchestrator | 2026-03-27 00:41:36.710537 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-27 00:41:36.710548 | orchestrator | Friday 27 March 2026 00:41:33 +0000 (0:00:00.133) 0:00:10.535 ********** 2026-03-27 00:41:36.710559 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.710570 | orchestrator | 2026-03-27 00:41:36.710580 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-27 00:41:36.710591 | orchestrator | Friday 27 March 2026 00:41:33 +0000 (0:00:00.135) 0:00:10.671 ********** 2026-03-27 00:41:36.710601 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.710612 | orchestrator | 2026-03-27 00:41:36.710623 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-27 00:41:36.710633 | orchestrator | Friday 27 March 2026 00:41:33 +0000 
(0:00:00.136) 0:00:10.807 ********** 2026-03-27 00:41:36.710644 | orchestrator | ok: [testbed-node-3] => { 2026-03-27 00:41:36.710655 | orchestrator |  "ceph_osd_devices": { 2026-03-27 00:41:36.710666 | orchestrator |  "sdb": { 2026-03-27 00:41:36.710677 | orchestrator |  "osd_lvm_uuid": "49c52ee7-6668-5cd2-bd86-f7267953750e" 2026-03-27 00:41:36.710687 | orchestrator |  }, 2026-03-27 00:41:36.710698 | orchestrator |  "sdc": { 2026-03-27 00:41:36.710708 | orchestrator |  "osd_lvm_uuid": "2cf1a901-b2f7-5490-8423-90f944953f5f" 2026-03-27 00:41:36.710719 | orchestrator |  } 2026-03-27 00:41:36.710730 | orchestrator |  } 2026-03-27 00:41:36.710741 | orchestrator | } 2026-03-27 00:41:36.710752 | orchestrator | 2026-03-27 00:41:36.710762 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-27 00:41:36.710773 | orchestrator | Friday 27 March 2026 00:41:33 +0000 (0:00:00.135) 0:00:10.943 ********** 2026-03-27 00:41:36.710784 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.710795 | orchestrator | 2026-03-27 00:41:36.710806 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-27 00:41:36.710816 | orchestrator | Friday 27 March 2026 00:41:33 +0000 (0:00:00.122) 0:00:11.066 ********** 2026-03-27 00:41:36.710827 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.710838 | orchestrator | 2026-03-27 00:41:36.710849 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-27 00:41:36.710859 | orchestrator | Friday 27 March 2026 00:41:33 +0000 (0:00:00.129) 0:00:11.195 ********** 2026-03-27 00:41:36.710870 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:41:36.710881 | orchestrator | 2026-03-27 00:41:36.710891 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-27 00:41:36.710902 | orchestrator | Friday 27 March 2026 00:41:34 +0000 
(0:00:00.111) 0:00:11.307 ********** 2026-03-27 00:41:36.710913 | orchestrator | changed: [testbed-node-3] => { 2026-03-27 00:41:36.710924 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-27 00:41:36.710935 | orchestrator |  "ceph_osd_devices": { 2026-03-27 00:41:36.710946 | orchestrator |  "sdb": { 2026-03-27 00:41:36.710957 | orchestrator |  "osd_lvm_uuid": "49c52ee7-6668-5cd2-bd86-f7267953750e" 2026-03-27 00:41:36.710968 | orchestrator |  }, 2026-03-27 00:41:36.710978 | orchestrator |  "sdc": { 2026-03-27 00:41:36.710989 | orchestrator |  "osd_lvm_uuid": "2cf1a901-b2f7-5490-8423-90f944953f5f" 2026-03-27 00:41:36.711000 | orchestrator |  } 2026-03-27 00:41:36.711010 | orchestrator |  }, 2026-03-27 00:41:36.711021 | orchestrator |  "lvm_volumes": [ 2026-03-27 00:41:36.711032 | orchestrator |  { 2026-03-27 00:41:36.711043 | orchestrator |  "data": "osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e", 2026-03-27 00:41:36.711053 | orchestrator |  "data_vg": "ceph-49c52ee7-6668-5cd2-bd86-f7267953750e" 2026-03-27 00:41:36.711070 | orchestrator |  }, 2026-03-27 00:41:36.711134 | orchestrator |  { 2026-03-27 00:41:36.711147 | orchestrator |  "data": "osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f", 2026-03-27 00:41:36.711158 | orchestrator |  "data_vg": "ceph-2cf1a901-b2f7-5490-8423-90f944953f5f" 2026-03-27 00:41:36.711169 | orchestrator |  } 2026-03-27 00:41:36.711179 | orchestrator |  ] 2026-03-27 00:41:36.711190 | orchestrator |  } 2026-03-27 00:41:36.711201 | orchestrator | } 2026-03-27 00:41:36.711211 | orchestrator | 2026-03-27 00:41:36.711222 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-27 00:41:36.711233 | orchestrator | Friday 27 March 2026 00:41:34 +0000 (0:00:00.208) 0:00:11.516 ********** 2026-03-27 00:41:36.711244 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-27 00:41:36.711254 | orchestrator | 2026-03-27 00:41:36.711265 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-27 00:41:36.711276 | orchestrator | 2026-03-27 00:41:36.711286 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-27 00:41:36.711297 | orchestrator | Friday 27 March 2026 00:41:36 +0000 (0:00:01.974) 0:00:13.490 ********** 2026-03-27 00:41:36.711308 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-27 00:41:36.711318 | orchestrator | 2026-03-27 00:41:36.711334 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-27 00:41:36.711346 | orchestrator | Friday 27 March 2026 00:41:36 +0000 (0:00:00.224) 0:00:13.715 ********** 2026-03-27 00:41:36.711357 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:41:36.711368 | orchestrator | 2026-03-27 00:41:36.711386 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.128547 | orchestrator | Friday 27 March 2026 00:41:36 +0000 (0:00:00.210) 0:00:13.925 ********** 2026-03-27 00:41:44.128682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-27 00:41:44.128714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-27 00:41:44.128732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-27 00:41:44.128750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-27 00:41:44.128768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-27 00:41:44.128786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-27 00:41:44.128806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-27 00:41:44.128830 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-27 00:41:44.128850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-27 00:41:44.128869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-27 00:41:44.128887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-27 00:41:44.128907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-27 00:41:44.128925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-27 00:41:44.128943 | orchestrator | 2026-03-27 00:41:44.128963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.128980 | orchestrator | Friday 27 March 2026 00:41:37 +0000 (0:00:00.324) 0:00:14.250 ********** 2026-03-27 00:41:44.128997 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.129018 | orchestrator | 2026-03-27 00:41:44.129035 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.129163 | orchestrator | Friday 27 March 2026 00:41:37 +0000 (0:00:00.176) 0:00:14.426 ********** 2026-03-27 00:41:44.129225 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.129246 | orchestrator | 2026-03-27 00:41:44.129265 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.129282 | orchestrator | Friday 27 March 2026 00:41:37 +0000 (0:00:00.173) 0:00:14.599 ********** 2026-03-27 00:41:44.129301 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.129320 | orchestrator | 2026-03-27 00:41:44.129338 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.129356 | 
orchestrator | Friday 27 March 2026 00:41:37 +0000 (0:00:00.183) 0:00:14.783 ********** 2026-03-27 00:41:44.129373 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.129391 | orchestrator | 2026-03-27 00:41:44.129410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.129427 | orchestrator | Friday 27 March 2026 00:41:37 +0000 (0:00:00.170) 0:00:14.953 ********** 2026-03-27 00:41:44.129446 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.129516 | orchestrator | 2026-03-27 00:41:44.129536 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.129553 | orchestrator | Friday 27 March 2026 00:41:38 +0000 (0:00:00.515) 0:00:15.469 ********** 2026-03-27 00:41:44.129571 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.129591 | orchestrator | 2026-03-27 00:41:44.129609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.129627 | orchestrator | Friday 27 March 2026 00:41:38 +0000 (0:00:00.172) 0:00:15.642 ********** 2026-03-27 00:41:44.129644 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.129661 | orchestrator | 2026-03-27 00:41:44.129679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.129697 | orchestrator | Friday 27 March 2026 00:41:38 +0000 (0:00:00.193) 0:00:15.836 ********** 2026-03-27 00:41:44.129715 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.129732 | orchestrator | 2026-03-27 00:41:44.129750 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.129768 | orchestrator | Friday 27 March 2026 00:41:38 +0000 (0:00:00.198) 0:00:16.034 ********** 2026-03-27 00:41:44.129785 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376) 2026-03-27 00:41:44.129806 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376) 2026-03-27 00:41:44.129823 | orchestrator | 2026-03-27 00:41:44.129865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.129886 | orchestrator | Friday 27 March 2026 00:41:39 +0000 (0:00:00.406) 0:00:16.440 ********** 2026-03-27 00:41:44.129906 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_86c6402f-d184-4443-979d-ecd201841231) 2026-03-27 00:41:44.129924 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_86c6402f-d184-4443-979d-ecd201841231) 2026-03-27 00:41:44.129942 | orchestrator | 2026-03-27 00:41:44.129961 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.129978 | orchestrator | Friday 27 March 2026 00:41:39 +0000 (0:00:00.425) 0:00:16.866 ********** 2026-03-27 00:41:44.129997 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_131bb9e5-0133-49dd-b67b-125236a47022) 2026-03-27 00:41:44.130014 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_131bb9e5-0133-49dd-b67b-125236a47022) 2026-03-27 00:41:44.130136 | orchestrator | 2026-03-27 00:41:44.130156 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:41:44.130207 | orchestrator | Friday 27 March 2026 00:41:40 +0000 (0:00:00.434) 0:00:17.301 ********** 2026-03-27 00:41:44.130227 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2796c507-44e5-4ccf-b3e2-014e00eaf9ef) 2026-03-27 00:41:44.130248 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2796c507-44e5-4ccf-b3e2-014e00eaf9ef) 2026-03-27 00:41:44.130268 | orchestrator | 2026-03-27 00:41:44.130304 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-27 00:41:44.130326 | orchestrator | Friday 27 March 2026 00:41:40 +0000 (0:00:00.462) 0:00:17.763 ********** 2026-03-27 00:41:44.130345 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-27 00:41:44.130365 | orchestrator | 2026-03-27 00:41:44.130383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.130402 | orchestrator | Friday 27 March 2026 00:41:40 +0000 (0:00:00.314) 0:00:18.078 ********** 2026-03-27 00:41:44.130421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-27 00:41:44.130441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-27 00:41:44.130461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-27 00:41:44.130480 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-27 00:41:44.130499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-27 00:41:44.130519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-27 00:41:44.130539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-27 00:41:44.130559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-27 00:41:44.130579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-27 00:41:44.130596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-27 00:41:44.130616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-03-27 00:41:44.130637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-27 00:41:44.130656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-27 00:41:44.130677 | orchestrator | 2026-03-27 00:41:44.130696 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.130716 | orchestrator | Friday 27 March 2026 00:41:41 +0000 (0:00:00.372) 0:00:18.450 ********** 2026-03-27 00:41:44.130736 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.130755 | orchestrator | 2026-03-27 00:41:44.130774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.130809 | orchestrator | Friday 27 March 2026 00:41:41 +0000 (0:00:00.202) 0:00:18.653 ********** 2026-03-27 00:41:44.130828 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.130846 | orchestrator | 2026-03-27 00:41:44.130863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.130879 | orchestrator | Friday 27 March 2026 00:41:42 +0000 (0:00:00.743) 0:00:19.396 ********** 2026-03-27 00:41:44.130897 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.130914 | orchestrator | 2026-03-27 00:41:44.130931 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.130949 | orchestrator | Friday 27 March 2026 00:41:42 +0000 (0:00:00.202) 0:00:19.598 ********** 2026-03-27 00:41:44.130967 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.130985 | orchestrator | 2026-03-27 00:41:44.131003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.131021 | orchestrator | Friday 27 March 2026 00:41:42 +0000 (0:00:00.194) 0:00:19.793 ********** 2026-03-27 00:41:44.131039 
| orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.131058 | orchestrator | 2026-03-27 00:41:44.131177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.131199 | orchestrator | Friday 27 March 2026 00:41:42 +0000 (0:00:00.195) 0:00:19.988 ********** 2026-03-27 00:41:44.131218 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.131251 | orchestrator | 2026-03-27 00:41:44.131282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.131300 | orchestrator | Friday 27 March 2026 00:41:42 +0000 (0:00:00.175) 0:00:20.164 ********** 2026-03-27 00:41:44.131318 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.131335 | orchestrator | 2026-03-27 00:41:44.131352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.131370 | orchestrator | Friday 27 March 2026 00:41:43 +0000 (0:00:00.215) 0:00:20.379 ********** 2026-03-27 00:41:44.131388 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:44.131406 | orchestrator | 2026-03-27 00:41:44.131423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.131441 | orchestrator | Friday 27 March 2026 00:41:43 +0000 (0:00:00.188) 0:00:20.568 ********** 2026-03-27 00:41:44.131459 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-27 00:41:44.131478 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-27 00:41:44.131496 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-27 00:41:44.131514 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-27 00:41:44.131532 | orchestrator | 2026-03-27 00:41:44.131551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:44.131569 | orchestrator | Friday 27 March 2026 00:41:43 +0000 (0:00:00.646) 0:00:21.214 
********** 2026-03-27 00:41:44.131587 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397138 | orchestrator | 2026-03-27 00:41:51.397222 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:51.397233 | orchestrator | Friday 27 March 2026 00:41:44 +0000 (0:00:00.211) 0:00:21.426 ********** 2026-03-27 00:41:51.397241 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397249 | orchestrator | 2026-03-27 00:41:51.397256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:51.397263 | orchestrator | Friday 27 March 2026 00:41:44 +0000 (0:00:00.215) 0:00:21.641 ********** 2026-03-27 00:41:51.397270 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397277 | orchestrator | 2026-03-27 00:41:51.397283 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:41:51.397290 | orchestrator | Friday 27 March 2026 00:41:44 +0000 (0:00:00.183) 0:00:21.825 ********** 2026-03-27 00:41:51.397297 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397303 | orchestrator | 2026-03-27 00:41:51.397310 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-27 00:41:51.397317 | orchestrator | Friday 27 March 2026 00:41:44 +0000 (0:00:00.196) 0:00:22.021 ********** 2026-03-27 00:41:51.397324 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-27 00:41:51.397331 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-27 00:41:51.397338 | orchestrator | 2026-03-27 00:41:51.397344 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-27 00:41:51.397351 | orchestrator | Friday 27 March 2026 00:41:45 +0000 (0:00:00.411) 0:00:22.433 ********** 2026-03-27 00:41:51.397358 | orchestrator | skipping: 
[testbed-node-4] 2026-03-27 00:41:51.397364 | orchestrator | 2026-03-27 00:41:51.397371 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-27 00:41:51.397378 | orchestrator | Friday 27 March 2026 00:41:45 +0000 (0:00:00.150) 0:00:22.584 ********** 2026-03-27 00:41:51.397384 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397391 | orchestrator | 2026-03-27 00:41:51.397397 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-27 00:41:51.397404 | orchestrator | Friday 27 March 2026 00:41:45 +0000 (0:00:00.143) 0:00:22.728 ********** 2026-03-27 00:41:51.397411 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397418 | orchestrator | 2026-03-27 00:41:51.397425 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-27 00:41:51.397431 | orchestrator | Friday 27 March 2026 00:41:45 +0000 (0:00:00.133) 0:00:22.861 ********** 2026-03-27 00:41:51.397457 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:41:51.397466 | orchestrator | 2026-03-27 00:41:51.397472 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-27 00:41:51.397479 | orchestrator | Friday 27 March 2026 00:41:45 +0000 (0:00:00.148) 0:00:23.009 ********** 2026-03-27 00:41:51.397486 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'}}) 2026-03-27 00:41:51.397493 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '627e7bc4-4e7d-5af1-903b-8d115676372d'}}) 2026-03-27 00:41:51.397500 | orchestrator | 2026-03-27 00:41:51.397507 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-27 00:41:51.397513 | orchestrator | Friday 27 March 2026 00:41:45 +0000 (0:00:00.179) 0:00:23.189 ********** 2026-03-27 00:41:51.397521 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'}})  2026-03-27 00:41:51.397528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '627e7bc4-4e7d-5af1-903b-8d115676372d'}})  2026-03-27 00:41:51.397535 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397541 | orchestrator | 2026-03-27 00:41:51.397548 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-27 00:41:51.397555 | orchestrator | Friday 27 March 2026 00:41:46 +0000 (0:00:00.167) 0:00:23.356 ********** 2026-03-27 00:41:51.397561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'}})  2026-03-27 00:41:51.397568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '627e7bc4-4e7d-5af1-903b-8d115676372d'}})  2026-03-27 00:41:51.397575 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397582 | orchestrator | 2026-03-27 00:41:51.397589 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-27 00:41:51.397595 | orchestrator | Friday 27 March 2026 00:41:46 +0000 (0:00:00.172) 0:00:23.528 ********** 2026-03-27 00:41:51.397602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'}})  2026-03-27 00:41:51.397609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '627e7bc4-4e7d-5af1-903b-8d115676372d'}})  2026-03-27 00:41:51.397615 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397622 | orchestrator | 2026-03-27 00:41:51.397642 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-27 00:41:51.397650 | orchestrator | Friday 27 March 2026 00:41:46 +0000 
(0:00:00.166) 0:00:23.695 ********** 2026-03-27 00:41:51.397658 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:41:51.397665 | orchestrator | 2026-03-27 00:41:51.397673 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-27 00:41:51.397680 | orchestrator | Friday 27 March 2026 00:41:46 +0000 (0:00:00.159) 0:00:23.855 ********** 2026-03-27 00:41:51.397687 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:41:51.397695 | orchestrator | 2026-03-27 00:41:51.397703 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-27 00:41:51.397711 | orchestrator | Friday 27 March 2026 00:41:46 +0000 (0:00:00.129) 0:00:23.985 ********** 2026-03-27 00:41:51.397731 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397739 | orchestrator | 2026-03-27 00:41:51.397746 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-27 00:41:51.397754 | orchestrator | Friday 27 March 2026 00:41:46 +0000 (0:00:00.130) 0:00:24.116 ********** 2026-03-27 00:41:51.397761 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397769 | orchestrator | 2026-03-27 00:41:51.397776 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-27 00:41:51.397783 | orchestrator | Friday 27 March 2026 00:41:47 +0000 (0:00:00.423) 0:00:24.539 ********** 2026-03-27 00:41:51.397791 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397803 | orchestrator | 2026-03-27 00:41:51.397811 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-27 00:41:51.397818 | orchestrator | Friday 27 March 2026 00:41:47 +0000 (0:00:00.115) 0:00:24.654 ********** 2026-03-27 00:41:51.397825 | orchestrator | ok: [testbed-node-4] => { 2026-03-27 00:41:51.397833 | orchestrator |  "ceph_osd_devices": { 2026-03-27 00:41:51.397840 | orchestrator |  "sdb": 
{ 2026-03-27 00:41:51.397848 | orchestrator |  "osd_lvm_uuid": "b8da8e02-1f61-55dd-bf76-a4ff2d17c49f" 2026-03-27 00:41:51.397856 | orchestrator |  }, 2026-03-27 00:41:51.397864 | orchestrator |  "sdc": { 2026-03-27 00:41:51.397871 | orchestrator |  "osd_lvm_uuid": "627e7bc4-4e7d-5af1-903b-8d115676372d" 2026-03-27 00:41:51.397879 | orchestrator |  } 2026-03-27 00:41:51.397886 | orchestrator |  } 2026-03-27 00:41:51.397894 | orchestrator | } 2026-03-27 00:41:51.397901 | orchestrator | 2026-03-27 00:41:51.397908 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-27 00:41:51.397915 | orchestrator | Friday 27 March 2026 00:41:47 +0000 (0:00:00.151) 0:00:24.806 ********** 2026-03-27 00:41:51.397923 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397930 | orchestrator | 2026-03-27 00:41:51.397938 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-27 00:41:51.397945 | orchestrator | Friday 27 March 2026 00:41:47 +0000 (0:00:00.129) 0:00:24.935 ********** 2026-03-27 00:41:51.397952 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397959 | orchestrator | 2026-03-27 00:41:51.397966 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-27 00:41:51.397974 | orchestrator | Friday 27 March 2026 00:41:47 +0000 (0:00:00.127) 0:00:25.062 ********** 2026-03-27 00:41:51.397981 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:41:51.397989 | orchestrator | 2026-03-27 00:41:51.397996 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-27 00:41:51.398004 | orchestrator | Friday 27 March 2026 00:41:47 +0000 (0:00:00.134) 0:00:25.197 ********** 2026-03-27 00:41:51.398011 | orchestrator | changed: [testbed-node-4] => { 2026-03-27 00:41:51.398075 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-27 00:41:51.398084 | orchestrator 
|  "ceph_osd_devices": { 2026-03-27 00:41:51.398090 | orchestrator |  "sdb": { 2026-03-27 00:41:51.398097 | orchestrator |  "osd_lvm_uuid": "b8da8e02-1f61-55dd-bf76-a4ff2d17c49f" 2026-03-27 00:41:51.398103 | orchestrator |  }, 2026-03-27 00:41:51.398110 | orchestrator |  "sdc": { 2026-03-27 00:41:51.398117 | orchestrator |  "osd_lvm_uuid": "627e7bc4-4e7d-5af1-903b-8d115676372d" 2026-03-27 00:41:51.398123 | orchestrator |  } 2026-03-27 00:41:51.398130 | orchestrator |  }, 2026-03-27 00:41:51.398136 | orchestrator |  "lvm_volumes": [ 2026-03-27 00:41:51.398143 | orchestrator |  { 2026-03-27 00:41:51.398149 | orchestrator |  "data": "osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f", 2026-03-27 00:41:51.398156 | orchestrator |  "data_vg": "ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f" 2026-03-27 00:41:51.398162 | orchestrator |  }, 2026-03-27 00:41:51.398169 | orchestrator |  { 2026-03-27 00:41:51.398175 | orchestrator |  "data": "osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d", 2026-03-27 00:41:51.398182 | orchestrator |  "data_vg": "ceph-627e7bc4-4e7d-5af1-903b-8d115676372d" 2026-03-27 00:41:51.398188 | orchestrator |  } 2026-03-27 00:41:51.398195 | orchestrator |  ] 2026-03-27 00:41:51.398201 | orchestrator |  } 2026-03-27 00:41:51.398208 | orchestrator | } 2026-03-27 00:41:51.398215 | orchestrator | 2026-03-27 00:41:51.398221 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-27 00:41:51.398228 | orchestrator | Friday 27 March 2026 00:41:48 +0000 (0:00:00.240) 0:00:25.437 ********** 2026-03-27 00:41:51.398235 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-27 00:41:51.398241 | orchestrator | 2026-03-27 00:41:51.398253 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-27 00:41:51.398260 | orchestrator | 2026-03-27 00:41:51.398266 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2026-03-27 00:41:51.398273 | orchestrator | Friday 27 March 2026 00:41:49 +0000 (0:00:01.149) 0:00:26.586 **********
2026-03-27 00:41:51.398279 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-27 00:41:51.398286 | orchestrator |
2026-03-27 00:41:51.398293 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-27 00:41:51.398299 | orchestrator | Friday 27 March 2026 00:41:49 +0000 (0:00:00.504) 0:00:27.090 **********
2026-03-27 00:41:51.398306 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:41:51.398312 | orchestrator |
2026-03-27 00:41:51.398319 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:51.398325 | orchestrator | Friday 27 March 2026 00:41:50 +0000 (0:00:00.981) 0:00:28.072 **********
2026-03-27 00:41:51.398332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-27 00:41:51.398338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-27 00:41:51.398345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-27 00:41:51.398351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-27 00:41:51.398358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-27 00:41:51.398370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-27 00:41:59.228199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-27 00:41:59.228274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-27 00:41:59.228281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-27 00:41:59.228286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-27 00:41:59.228303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-27 00:41:59.228308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-27 00:41:59.228312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-27 00:41:59.228316 | orchestrator |
2026-03-27 00:41:59.228321 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228326 | orchestrator | Friday 27 March 2026 00:41:51 +0000 (0:00:00.557) 0:00:28.630 **********
2026-03-27 00:41:59.228331 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228336 | orchestrator |
2026-03-27 00:41:59.228340 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228344 | orchestrator | Friday 27 March 2026 00:41:51 +0000 (0:00:00.170) 0:00:28.800 **********
2026-03-27 00:41:59.228348 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228351 | orchestrator |
2026-03-27 00:41:59.228355 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228359 | orchestrator | Friday 27 March 2026 00:41:51 +0000 (0:00:00.174) 0:00:28.974 **********
2026-03-27 00:41:59.228363 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228367 | orchestrator |
2026-03-27 00:41:59.228370 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228374 | orchestrator | Friday 27 March 2026 00:41:51 +0000 (0:00:00.177) 0:00:29.152 **********
2026-03-27 00:41:59.228380 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228384 | orchestrator |
2026-03-27 00:41:59.228388 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228391 | orchestrator | Friday 27 March 2026 00:41:52 +0000 (0:00:00.169) 0:00:29.321 **********
2026-03-27 00:41:59.228409 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228413 | orchestrator |
2026-03-27 00:41:59.228417 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228421 | orchestrator | Friday 27 March 2026 00:41:52 +0000 (0:00:00.180) 0:00:29.501 **********
2026-03-27 00:41:59.228425 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228428 | orchestrator |
2026-03-27 00:41:59.228432 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228436 | orchestrator | Friday 27 March 2026 00:41:52 +0000 (0:00:00.259) 0:00:29.760 **********
2026-03-27 00:41:59.228440 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228443 | orchestrator |
2026-03-27 00:41:59.228448 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228451 | orchestrator | Friday 27 March 2026 00:41:52 +0000 (0:00:00.180) 0:00:29.940 **********
2026-03-27 00:41:59.228455 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228459 | orchestrator |
2026-03-27 00:41:59.228463 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228467 | orchestrator | Friday 27 March 2026 00:41:52 +0000 (0:00:00.179) 0:00:30.119 **********
2026-03-27 00:41:59.228470 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b)
2026-03-27 00:41:59.228475 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b)
2026-03-27 00:41:59.228479 | orchestrator |
2026-03-27 00:41:59.228483 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228486 | orchestrator | Friday 27 March 2026 00:41:53 +0000 (0:00:00.555) 0:00:30.675 **********
2026-03-27 00:41:59.228490 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3878b4cc-7fe4-4758-b0af-fcf7391d431c)
2026-03-27 00:41:59.228494 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3878b4cc-7fe4-4758-b0af-fcf7391d431c)
2026-03-27 00:41:59.228498 | orchestrator |
2026-03-27 00:41:59.228501 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228505 | orchestrator | Friday 27 March 2026 00:41:54 +0000 (0:00:00.780) 0:00:31.455 **********
2026-03-27 00:41:59.228509 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53da1fd0-572d-430c-b2ac-506bde32f617)
2026-03-27 00:41:59.228513 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53da1fd0-572d-430c-b2ac-506bde32f617)
2026-03-27 00:41:59.228516 | orchestrator |
2026-03-27 00:41:59.228520 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228524 | orchestrator | Friday 27 March 2026 00:41:54 +0000 (0:00:00.435) 0:00:31.890 **********
2026-03-27 00:41:59.228528 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3917e6ab-68a3-44be-970a-31d9d2a57984)
2026-03-27 00:41:59.228531 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3917e6ab-68a3-44be-970a-31d9d2a57984)
2026-03-27 00:41:59.228535 | orchestrator |
2026-03-27 00:41:59.228539 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:41:59.228542 | orchestrator | Friday 27 March 2026 00:41:55 +0000 (0:00:00.391) 0:00:32.282 **********
2026-03-27 00:41:59.228546 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-27 00:41:59.228550 | orchestrator |
2026-03-27 00:41:59.228554 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228568 | orchestrator | Friday 27 March 2026 00:41:55 +0000 (0:00:00.292) 0:00:32.574 **********
2026-03-27 00:41:59.228572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-27 00:41:59.228576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-27 00:41:59.228580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-27 00:41:59.228584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-27 00:41:59.228591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-27 00:41:59.228594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-27 00:41:59.228598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-27 00:41:59.228602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-27 00:41:59.228606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-27 00:41:59.228609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-27 00:41:59.228613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-27 00:41:59.228617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-27 00:41:59.228620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-27 00:41:59.228624 | orchestrator |
2026-03-27 00:41:59.228628 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228632 | orchestrator | Friday 27 March 2026 00:41:55 +0000 (0:00:00.407) 0:00:32.982 **********
2026-03-27 00:41:59.228635 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228639 | orchestrator |
2026-03-27 00:41:59.228643 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228647 | orchestrator | Friday 27 March 2026 00:41:55 +0000 (0:00:00.192) 0:00:33.174 **********
2026-03-27 00:41:59.228650 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228654 | orchestrator |
2026-03-27 00:41:59.228658 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228661 | orchestrator | Friday 27 March 2026 00:41:56 +0000 (0:00:00.200) 0:00:33.375 **********
2026-03-27 00:41:59.228665 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228669 | orchestrator |
2026-03-27 00:41:59.228673 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228679 | orchestrator | Friday 27 March 2026 00:41:56 +0000 (0:00:00.208) 0:00:33.584 **********
2026-03-27 00:41:59.228683 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228687 | orchestrator |
2026-03-27 00:41:59.228691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228695 | orchestrator | Friday 27 March 2026 00:41:56 +0000 (0:00:00.189) 0:00:33.774 **********
2026-03-27 00:41:59.228698 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228702 | orchestrator |
2026-03-27 00:41:59.228706 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228710 | orchestrator | Friday 27 March 2026 00:41:56 +0000 (0:00:00.205) 0:00:33.980 **********
2026-03-27 00:41:59.228713 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228717 | orchestrator |
2026-03-27 00:41:59.228721 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228725 | orchestrator | Friday 27 March 2026 00:41:57 +0000 (0:00:00.588) 0:00:34.568 **********
2026-03-27 00:41:59.228728 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228732 | orchestrator |
2026-03-27 00:41:59.228736 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228740 | orchestrator | Friday 27 March 2026 00:41:57 +0000 (0:00:00.189) 0:00:34.757 **********
2026-03-27 00:41:59.228743 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228747 | orchestrator |
2026-03-27 00:41:59.228751 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228755 | orchestrator | Friday 27 March 2026 00:41:57 +0000 (0:00:00.188) 0:00:34.945 **********
2026-03-27 00:41:59.228759 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-27 00:41:59.228767 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-27 00:41:59.228771 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-27 00:41:59.228776 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-27 00:41:59.228780 | orchestrator |
2026-03-27 00:41:59.228784 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228788 | orchestrator | Friday 27 March 2026 00:41:58 +0000 (0:00:00.644) 0:00:35.590 **********
2026-03-27 00:41:59.228793 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228797 | orchestrator |
2026-03-27 00:41:59.228801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228805 | orchestrator | Friday 27 March 2026 00:41:58 +0000 (0:00:00.238) 0:00:35.829 **********
2026-03-27 00:41:59.228810 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228814 | orchestrator |
2026-03-27 00:41:59.228818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228822 | orchestrator | Friday 27 March 2026 00:41:58 +0000 (0:00:00.204) 0:00:36.033 **********
2026-03-27 00:41:59.228826 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228830 | orchestrator |
2026-03-27 00:41:59.228834 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:41:59.228839 | orchestrator | Friday 27 March 2026 00:41:59 +0000 (0:00:00.228) 0:00:36.262 **********
2026-03-27 00:41:59.228843 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:41:59.228847 | orchestrator |
2026-03-27 00:41:59.228854 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-27 00:42:03.349731 | orchestrator | Friday 27 March 2026 00:41:59 +0000 (0:00:00.183) 0:00:36.446 **********
2026-03-27 00:42:03.349830 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-27 00:42:03.349846 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-27 00:42:03.349859 | orchestrator |
2026-03-27 00:42:03.349871 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-27 00:42:03.349882 | orchestrator | Friday 27 March 2026 00:41:59 +0000 (0:00:00.174) 0:00:36.620 **********
2026-03-27 00:42:03.349893 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.349904 | orchestrator |
2026-03-27 00:42:03.349915 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-27 00:42:03.349926 | orchestrator | Friday 27 March 2026 00:41:59 +0000 (0:00:00.139) 0:00:36.760 **********
2026-03-27 00:42:03.349936 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.349947 | orchestrator |
2026-03-27 00:42:03.349957 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-27 00:42:03.349968 | orchestrator | Friday 27 March 2026 00:41:59 +0000 (0:00:00.176) 0:00:36.937 **********
2026-03-27 00:42:03.349979 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.349989 | orchestrator |
2026-03-27 00:42:03.350001 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-27 00:42:03.350012 | orchestrator | Friday 27 March 2026 00:41:59 +0000 (0:00:00.150) 0:00:37.087 **********
2026-03-27 00:42:03.350108 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:42:03.350121 | orchestrator |
2026-03-27 00:42:03.350132 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-27 00:42:03.350143 | orchestrator | Friday 27 March 2026 00:42:00 +0000 (0:00:00.367) 0:00:37.454 **********
2026-03-27 00:42:03.350154 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bb6fbf97-7198-5485-83ee-7be3b389ad62'}})
2026-03-27 00:42:03.350165 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'}})
2026-03-27 00:42:03.350176 | orchestrator |
2026-03-27 00:42:03.350187 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-27 00:42:03.350198 | orchestrator | Friday 27 March 2026 00:42:00 +0000 (0:00:00.180) 0:00:37.635 **********
2026-03-27 00:42:03.350209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bb6fbf97-7198-5485-83ee-7be3b389ad62'}})
2026-03-27 00:42:03.350248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'}})
2026-03-27 00:42:03.350262 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.350275 | orchestrator |
2026-03-27 00:42:03.350299 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-27 00:42:03.350312 | orchestrator | Friday 27 March 2026 00:42:00 +0000 (0:00:00.178) 0:00:37.813 **********
2026-03-27 00:42:03.350324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bb6fbf97-7198-5485-83ee-7be3b389ad62'}})
2026-03-27 00:42:03.350337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'}})
2026-03-27 00:42:03.350349 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.350361 | orchestrator |
2026-03-27 00:42:03.350374 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-27 00:42:03.350386 | orchestrator | Friday 27 March 2026 00:42:00 +0000 (0:00:00.167) 0:00:37.981 **********
2026-03-27 00:42:03.350398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bb6fbf97-7198-5485-83ee-7be3b389ad62'}})
2026-03-27 00:42:03.350411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'}})
2026-03-27 00:42:03.350423 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.350435 | orchestrator |
2026-03-27 00:42:03.350448 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-27 00:42:03.350460 | orchestrator | Friday 27 March 2026 00:42:00 +0000 (0:00:00.144) 0:00:38.125 **********
2026-03-27 00:42:03.350470 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:42:03.350481 | orchestrator |
2026-03-27 00:42:03.350491 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-27 00:42:03.350502 | orchestrator | Friday 27 March 2026 00:42:01 +0000 (0:00:00.136) 0:00:38.262 **********
2026-03-27 00:42:03.350513 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:42:03.350523 | orchestrator |
2026-03-27 00:42:03.350534 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-27 00:42:03.350544 | orchestrator | Friday 27 March 2026 00:42:01 +0000 (0:00:00.265) 0:00:38.527 **********
2026-03-27 00:42:03.350555 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.350565 | orchestrator |
2026-03-27 00:42:03.350576 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-27 00:42:03.350587 | orchestrator | Friday 27 March 2026 00:42:01 +0000 (0:00:00.145) 0:00:38.673 **********
2026-03-27 00:42:03.350597 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.350608 | orchestrator |
2026-03-27 00:42:03.350618 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-27 00:42:03.350629 | orchestrator | Friday 27 March 2026 00:42:01 +0000 (0:00:00.154) 0:00:38.827 **********
2026-03-27 00:42:03.350639 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.350650 | orchestrator |
2026-03-27 00:42:03.350661 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-27 00:42:03.350671 | orchestrator | Friday 27 March 2026 00:42:01 +0000 (0:00:00.122) 0:00:38.950 **********
2026-03-27 00:42:03.350682 | orchestrator | ok: [testbed-node-5] => {
2026-03-27 00:42:03.350693 | orchestrator |  "ceph_osd_devices": {
2026-03-27 00:42:03.350704 | orchestrator |  "sdb": {
2026-03-27 00:42:03.350733 | orchestrator |  "osd_lvm_uuid": "bb6fbf97-7198-5485-83ee-7be3b389ad62"
2026-03-27 00:42:03.350745 | orchestrator |  },
2026-03-27 00:42:03.350756 | orchestrator |  "sdc": {
2026-03-27 00:42:03.350783 | orchestrator |  "osd_lvm_uuid": "f9aa8e5e-9a1f-5185-aaa5-5b53eb599331"
2026-03-27 00:42:03.350794 | orchestrator |  }
2026-03-27 00:42:03.350805 | orchestrator |  }
2026-03-27 00:42:03.350816 | orchestrator | }
2026-03-27 00:42:03.350827 | orchestrator |
2026-03-27 00:42:03.350881 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-27 00:42:03.350893 | orchestrator | Friday 27 March 2026 00:42:01 +0000 (0:00:00.182) 0:00:39.133 **********
2026-03-27 00:42:03.350903 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.350914 | orchestrator |
2026-03-27 00:42:03.350925 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-27 00:42:03.350935 | orchestrator | Friday 27 March 2026 00:42:02 +0000 (0:00:00.154) 0:00:39.287 **********
2026-03-27 00:42:03.350946 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.350956 | orchestrator |
2026-03-27 00:42:03.350967 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-27 00:42:03.350978 | orchestrator | Friday 27 March 2026 00:42:02 +0000 (0:00:00.247) 0:00:39.534 **********
2026-03-27 00:42:03.350988 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:42:03.350999 | orchestrator |
2026-03-27 00:42:03.351009 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-27 00:42:03.351020 | orchestrator | Friday 27 March 2026 00:42:02 +0000 (0:00:00.092) 0:00:39.627 **********
2026-03-27 00:42:03.351030 | orchestrator | changed: [testbed-node-5] => {
2026-03-27 00:42:03.351041 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-27 00:42:03.351052 | orchestrator |  "ceph_osd_devices": {
2026-03-27 00:42:03.351096 | orchestrator |  "sdb": {
2026-03-27 00:42:03.351107 | orchestrator |  "osd_lvm_uuid": "bb6fbf97-7198-5485-83ee-7be3b389ad62"
2026-03-27 00:42:03.351118 | orchestrator |  },
2026-03-27 00:42:03.351129 | orchestrator |  "sdc": {
2026-03-27 00:42:03.351145 | orchestrator |  "osd_lvm_uuid": "f9aa8e5e-9a1f-5185-aaa5-5b53eb599331"
2026-03-27 00:42:03.351156 | orchestrator |  }
2026-03-27 00:42:03.351167 | orchestrator |  },
2026-03-27 00:42:03.351178 | orchestrator |  "lvm_volumes": [
2026-03-27 00:42:03.351189 | orchestrator |  {
2026-03-27 00:42:03.351199 | orchestrator |  "data": "osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62",
2026-03-27 00:42:03.351210 | orchestrator |  "data_vg": "ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62"
2026-03-27 00:42:03.351221 | orchestrator |  },
2026-03-27 00:42:03.351236 | orchestrator |  {
2026-03-27 00:42:03.351247 | orchestrator |  "data": "osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331",
2026-03-27 00:42:03.351258 | orchestrator |  "data_vg": "ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331"
2026-03-27 00:42:03.351268 | orchestrator |  }
2026-03-27 00:42:03.351279 | orchestrator |  ]
2026-03-27 00:42:03.351290 | orchestrator |  }
2026-03-27 00:42:03.351301 | orchestrator | }
2026-03-27 00:42:03.351312 | orchestrator |
2026-03-27 00:42:03.351322 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-27 00:42:03.351333 | orchestrator | Friday 27 March 2026 00:42:02 +0000 (0:00:00.148) 0:00:39.775 **********
2026-03-27 00:42:03.351343 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-27 00:42:03.351354 | orchestrator |
2026-03-27 00:42:03.351365 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:42:03.351376 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-27 00:42:03.351388 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-27 00:42:03.351399 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-27 00:42:03.351409 | orchestrator |
2026-03-27 00:42:03.351420 | orchestrator |
2026-03-27 00:42:03.351431 | orchestrator |
2026-03-27 00:42:03.351441 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:42:03.351452 | orchestrator | Friday 27 March 2026 00:42:03 +0000 (0:00:00.785) 0:00:40.561 **********
2026-03-27 00:42:03.351470 | orchestrator | ===============================================================================
2026-03-27 00:42:03.351480 | orchestrator | Write configuration file ------------------------------------------------ 3.91s
2026-03-27 00:42:03.351491 | orchestrator | Get initial list of available block devices ----------------------------- 1.41s
2026-03-27 00:42:03.351501 | orchestrator | Add known links to the list of available block devices ------------------ 1.24s
2026-03-27 00:42:03.351512 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s
2026-03-27 00:42:03.351523 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.93s
2026-03-27 00:42:03.351533 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2026-03-27 00:42:03.351544 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s
2026-03-27 00:42:03.351554 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.76s
2026-03-27 00:42:03.351565 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2026-03-27 00:42:03.351576 | orchestrator | Set WAL devices config data --------------------------------------------- 0.71s
2026-03-27 00:42:03.351586 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.70s
2026-03-27 00:42:03.351597 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-03-27 00:42:03.351608 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.65s
2026-03-27 00:42:03.351626 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2026-03-27 00:42:03.562282 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2026-03-27 00:42:03.562396 | orchestrator | Print configuration data ------------------------------------------------ 0.60s
2026-03-27 00:42:03.562422 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s
2026-03-27 00:42:03.562441 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2026-03-27 00:42:03.562461 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2026-03-27 00:42:03.562483 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2026-03-27 00:42:25.118350 | orchestrator | 2026-03-27 00:42:25 | INFO  | Task 2663e179-36c0-49d6-9140-29d74fab7f55 (sync inventory) is running in background. Output coming soon.
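The configuration data printed above follows a simple pattern: for each entry in `ceph_osd_devices`, the play derives a `lvm_volumes` item whose LV is named `osd-block-<uuid>` and whose VG is named `ceph-<uuid>` (the block-only layout, since the DB/WAL tasks were skipped). A minimal Python sketch of that mapping, using the UUIDs from this run; the function name is hypothetical and the real OSISM task logic may differ:

```python
# Hypothetical sketch: derive the lvm_volumes list from ceph_osd_devices
# for the block-only case, mirroring the JSON the play prints above.
def build_lvm_volumes(ceph_osd_devices):
    volumes = []
    for device, params in ceph_osd_devices.items():
        uuid = params["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

devices = {
    "sdb": {"osd_lvm_uuid": "bb6fbf97-7198-5485-83ee-7be3b389ad62"},
    "sdc": {"osd_lvm_uuid": "f9aa8e5e-9a1f-5185-aaa5-5b53eb599331"},
}
print(build_lvm_volumes(devices))
```

With `db_vg`/`wal_vg` devices configured, the skipped "block + db"/"block + wal" variants would add further keys per volume; in this testbed run only the block-only branch was taken.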
2026-03-27 00:42:54.373551 | orchestrator | 2026-03-27 00:42:26 | INFO  | Starting group_vars file reorganization
2026-03-27 00:42:54.373646 | orchestrator | 2026-03-27 00:42:26 | INFO  | Moved 0 file(s) to their respective directories
2026-03-27 00:42:54.373659 | orchestrator | 2026-03-27 00:42:26 | INFO  | Group_vars file reorganization completed
2026-03-27 00:42:54.373669 | orchestrator | 2026-03-27 00:42:29 | INFO  | Starting variable preparation from inventory
2026-03-27 00:42:54.373679 | orchestrator | 2026-03-27 00:42:31 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-27 00:42:54.373688 | orchestrator | 2026-03-27 00:42:31 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-27 00:42:54.373697 | orchestrator | 2026-03-27 00:42:31 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-27 00:42:54.373706 | orchestrator | 2026-03-27 00:42:31 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-27 00:42:54.373714 | orchestrator | 2026-03-27 00:42:31 | INFO  | Variable preparation completed
2026-03-27 00:42:54.373723 | orchestrator | 2026-03-27 00:42:32 | INFO  | Starting inventory overwrite handling
2026-03-27 00:42:54.373732 | orchestrator | 2026-03-27 00:42:32 | INFO  | Handling group overwrites in 99-overwrite
2026-03-27 00:42:54.373741 | orchestrator | 2026-03-27 00:42:32 | INFO  | Removing group frr:children from 60-generic
2026-03-27 00:42:54.373772 | orchestrator | 2026-03-27 00:42:32 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-27 00:42:54.373781 | orchestrator | 2026-03-27 00:42:32 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-27 00:42:54.373790 | orchestrator | 2026-03-27 00:42:32 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-27 00:42:54.373799 | orchestrator | 2026-03-27 00:42:32 | INFO  | Handling group overwrites in 20-roles
2026-03-27 00:42:54.373807 | orchestrator | 2026-03-27 00:42:32 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-27 00:42:54.373816 | orchestrator | 2026-03-27 00:42:32 | INFO  | Removed 5 group(s) in total
2026-03-27 00:42:54.373824 | orchestrator | 2026-03-27 00:42:32 | INFO  | Inventory overwrite handling completed
2026-03-27 00:42:54.373833 | orchestrator | 2026-03-27 00:42:34 | INFO  | Starting merge of inventory files
2026-03-27 00:42:54.373841 | orchestrator | 2026-03-27 00:42:34 | INFO  | Inventory files merged successfully
2026-03-27 00:42:54.373850 | orchestrator | 2026-03-27 00:42:39 | INFO  | Generating minified hosts file
2026-03-27 00:42:54.373858 | orchestrator | 2026-03-27 00:42:40 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-03-27 00:42:54.373868 | orchestrator | 2026-03-27 00:42:40 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-03-27 00:42:54.373892 | orchestrator | 2026-03-27 00:42:42 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-27 00:42:54.373901 | orchestrator | 2026-03-27 00:42:53 | INFO  | Successfully wrote ClusterShell configuration
2026-03-27 00:42:54.373911 | orchestrator | [master 14e2b1b] 2026-03-27-00-42
2026-03-27 00:42:54.373920 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-03-27 00:42:54.373930 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-03-27 00:42:54.373939 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-03-27 00:42:54.373947 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-03-27 00:42:55.806237 | orchestrator | 2026-03-27 00:42:55 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-27 00:42:55.859678 | orchestrator | 2026-03-27 00:42:55 | INFO  | Task 370702ea-7421-4512-8248-4e5030842b7a (ceph-create-lvm-devices) was prepared for execution.
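The "inventory overwrite handling" messages above describe a precedence rule: a group defined in a higher-priority inventory layer (e.g. `99-overwrite` or `20-roles`) causes the same group to be dropped from lower-priority layers (`60-generic`, `50-infrastructure`, `50-ceph`) before the files are merged. A hypothetical Python sketch of that idea; the function name and data shapes are assumptions for illustration, not the actual osism implementation:

```python
# Hypothetical sketch: drop groups from a lower-priority inventory layer
# when they are redefined in a higher-priority one, as the INFO log above
# reports (e.g. removing "frr:children" from 60-generic).
def remove_overwritten_groups(low_priority, high_priority):
    removed = []
    for group in list(low_priority):
        if group in high_priority:
            del low_priority[group]   # higher-priority definition wins
            removed.append(group)
    return removed

generic = {"frr:children": ["testbed-nodes"], "dnsmasq": ["testbed-manager"]}
overwrite = {"frr:children": []}      # 99-overwrite redefines frr:children
removed = remove_overwritten_groups(generic, overwrite)
print(removed)
print(sorted(generic))
```

After this pass, a plain merge of the layers cannot produce conflicting definitions of the same group, which is why the subsequent "merge of inventory files" step succeeds without ambiguity.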
2026-03-27 00:42:55.859778 | orchestrator | 2026-03-27 00:42:55 | INFO  | It takes a moment until task 370702ea-7421-4512-8248-4e5030842b7a (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-27 00:43:07.195019 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-27 00:43:07.195199 | orchestrator | 2.16.14
2026-03-27 00:43:07.195221 | orchestrator |
2026-03-27 00:43:07.195235 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-27 00:43:07.195249 | orchestrator |
2026-03-27 00:43:07.195261 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-27 00:43:07.195274 | orchestrator | Friday 27 March 2026 00:42:59 +0000 (0:00:00.244) 0:00:00.245 **********
2026-03-27 00:43:07.195287 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-27 00:43:07.195299 | orchestrator |
2026-03-27 00:43:07.195312 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-27 00:43:07.195325 | orchestrator | Friday 27 March 2026 00:43:00 +0000 (0:00:00.218) 0:00:00.463 **********
2026-03-27 00:43:07.195338 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:43:07.195351 | orchestrator |
2026-03-27 00:43:07.195363 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:07.195375 | orchestrator | Friday 27 March 2026 00:43:00 +0000 (0:00:00.193) 0:00:00.657 **********
2026-03-27 00:43:07.195410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-27 00:43:07.195421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-27 00:43:07.195432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-27 00:43:07.195444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-27 00:43:07.195455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-27 00:43:07.195480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-27 00:43:07.195488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-27 00:43:07.195494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-27 00:43:07.195500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-27 00:43:07.195506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-27 00:43:07.195512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-27 00:43:07.195518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-27 00:43:07.195524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-27 00:43:07.195530 | orchestrator |
2026-03-27 00:43:07.195536 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:07.195542 | orchestrator | Friday 27 March 2026 00:43:00 +0000 (0:00:00.352) 0:00:01.010 **********
2026-03-27 00:43:07.195548 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:43:07.195555 | orchestrator |
2026-03-27 00:43:07.195562 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:07.195569 | orchestrator | Friday 27 March 2026 00:43:01 +0000 (0:00:00.505) 0:00:01.516 **********
2026-03-27 00:43:07.195577 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:43:07.195584 | orchestrator |
2026-03-27 00:43:07.195591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:07.195598 | orchestrator | Friday 27 March 2026 00:43:01 +0000 (0:00:00.179) 0:00:01.695 **********
2026-03-27 00:43:07.195605 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:43:07.195612 | orchestrator |
2026-03-27 00:43:07.195619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:07.195626 | orchestrator | Friday 27 March 2026 00:43:01 +0000 (0:00:00.194) 0:00:01.890 **********
2026-03-27 00:43:07.195633 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:43:07.195640 | orchestrator |
2026-03-27 00:43:07.195647 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:07.195653 | orchestrator | Friday 27 March 2026 00:43:01 +0000 (0:00:00.174) 0:00:02.065 **********
2026-03-27 00:43:07.195661 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:43:07.195668 | orchestrator |
2026-03-27 00:43:07.195675 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:07.195682 | orchestrator | Friday 27 March 2026 00:43:01 +0000 (0:00:00.222) 0:00:02.288 **********
2026-03-27 00:43:07.195689 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:43:07.195696 | orchestrator |
2026-03-27 00:43:07.195704 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:07.195711 | orchestrator | Friday 27 March 2026 00:43:02 +0000 (0:00:00.171) 0:00:02.460 **********
2026-03-27 00:43:07.195721 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:43:07.195732 | orchestrator |
2026-03-27 00:43:07.195743 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:07.195753 | orchestrator | Friday 27 March 2026 00:43:02 +0000 (0:00:00.199) 0:00:02.659 **********
2026-03-27 00:43:07.195763 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:07.195781 | orchestrator | 2026-03-27 00:43:07.195793 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:07.195803 | orchestrator | Friday 27 March 2026 00:43:02 +0000 (0:00:00.177) 0:00:02.836 ********** 2026-03-27 00:43:07.195814 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526) 2026-03-27 00:43:07.195826 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526) 2026-03-27 00:43:07.195837 | orchestrator | 2026-03-27 00:43:07.195847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:07.195879 | orchestrator | Friday 27 March 2026 00:43:02 +0000 (0:00:00.396) 0:00:03.233 ********** 2026-03-27 00:43:07.195895 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_62ab2900-9bbe-4288-89a4-62dba7ae92ab) 2026-03-27 00:43:07.195907 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_62ab2900-9bbe-4288-89a4-62dba7ae92ab) 2026-03-27 00:43:07.195916 | orchestrator | 2026-03-27 00:43:07.195926 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:07.195937 | orchestrator | Friday 27 March 2026 00:43:03 +0000 (0:00:00.425) 0:00:03.659 ********** 2026-03-27 00:43:07.195948 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0ff86b74-b83b-4d7e-b564-01c0b90f308d) 2026-03-27 00:43:07.195960 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0ff86b74-b83b-4d7e-b564-01c0b90f308d) 2026-03-27 00:43:07.195971 | orchestrator | 2026-03-27 00:43:07.195982 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:07.195994 | orchestrator | Friday 27 March 2026 00:43:03 +0000 
(0:00:00.629) 0:00:04.289 ********** 2026-03-27 00:43:07.196005 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_52ce1f02-342d-40b1-ab4b-d26aefe85f26) 2026-03-27 00:43:07.196017 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_52ce1f02-342d-40b1-ab4b-d26aefe85f26) 2026-03-27 00:43:07.196028 | orchestrator | 2026-03-27 00:43:07.196039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:07.196095 | orchestrator | Friday 27 March 2026 00:43:04 +0000 (0:00:00.651) 0:00:04.940 ********** 2026-03-27 00:43:07.196105 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-27 00:43:07.196115 | orchestrator | 2026-03-27 00:43:07.196124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:07.196135 | orchestrator | Friday 27 March 2026 00:43:05 +0000 (0:00:00.740) 0:00:05.680 ********** 2026-03-27 00:43:07.196146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-27 00:43:07.196157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-27 00:43:07.196167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-27 00:43:07.196178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-27 00:43:07.196188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-27 00:43:07.196199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-27 00:43:07.196208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-27 00:43:07.196218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop7) 2026-03-27 00:43:07.196247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-27 00:43:07.196254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-27 00:43:07.196261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-27 00:43:07.196267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-27 00:43:07.196280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-27 00:43:07.196287 | orchestrator | 2026-03-27 00:43:07.196293 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:07.196303 | orchestrator | Friday 27 March 2026 00:43:05 +0000 (0:00:00.474) 0:00:06.154 ********** 2026-03-27 00:43:07.196313 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:07.196323 | orchestrator | 2026-03-27 00:43:07.196334 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:07.196344 | orchestrator | Friday 27 March 2026 00:43:05 +0000 (0:00:00.209) 0:00:06.364 ********** 2026-03-27 00:43:07.196353 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:07.196364 | orchestrator | 2026-03-27 00:43:07.196385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:07.196396 | orchestrator | Friday 27 March 2026 00:43:06 +0000 (0:00:00.195) 0:00:06.559 ********** 2026-03-27 00:43:07.196407 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:07.196416 | orchestrator | 2026-03-27 00:43:07.196427 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:07.196438 | orchestrator | Friday 27 March 2026 00:43:06 +0000 
(0:00:00.217) 0:00:06.777 ********** 2026-03-27 00:43:07.196448 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:07.196458 | orchestrator | 2026-03-27 00:43:07.196469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:07.196479 | orchestrator | Friday 27 March 2026 00:43:06 +0000 (0:00:00.227) 0:00:07.004 ********** 2026-03-27 00:43:07.196490 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:07.196500 | orchestrator | 2026-03-27 00:43:07.196511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:07.196521 | orchestrator | Friday 27 March 2026 00:43:06 +0000 (0:00:00.204) 0:00:07.209 ********** 2026-03-27 00:43:07.196531 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:07.196541 | orchestrator | 2026-03-27 00:43:07.196552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:07.196562 | orchestrator | Friday 27 March 2026 00:43:06 +0000 (0:00:00.201) 0:00:07.410 ********** 2026-03-27 00:43:07.196572 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:07.196582 | orchestrator | 2026-03-27 00:43:07.196603 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:15.067901 | orchestrator | Friday 27 March 2026 00:43:07 +0000 (0:00:00.198) 0:00:07.609 ********** 2026-03-27 00:43:15.067999 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068009 | orchestrator | 2026-03-27 00:43:15.068016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:15.068023 | orchestrator | Friday 27 March 2026 00:43:07 +0000 (0:00:00.194) 0:00:07.804 ********** 2026-03-27 00:43:15.068029 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-27 00:43:15.068036 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-27 
00:43:15.068042 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-27 00:43:15.068067 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-27 00:43:15.068087 | orchestrator | 2026-03-27 00:43:15.068127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:15.068134 | orchestrator | Friday 27 March 2026 00:43:08 +0000 (0:00:01.219) 0:00:09.023 ********** 2026-03-27 00:43:15.068141 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068147 | orchestrator | 2026-03-27 00:43:15.068153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:15.068159 | orchestrator | Friday 27 March 2026 00:43:08 +0000 (0:00:00.221) 0:00:09.244 ********** 2026-03-27 00:43:15.068165 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068171 | orchestrator | 2026-03-27 00:43:15.068177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:15.068202 | orchestrator | Friday 27 March 2026 00:43:09 +0000 (0:00:00.203) 0:00:09.448 ********** 2026-03-27 00:43:15.068208 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068213 | orchestrator | 2026-03-27 00:43:15.068219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:15.068225 | orchestrator | Friday 27 March 2026 00:43:09 +0000 (0:00:00.195) 0:00:09.643 ********** 2026-03-27 00:43:15.068231 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068237 | orchestrator | 2026-03-27 00:43:15.068254 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-27 00:43:15.068260 | orchestrator | Friday 27 March 2026 00:43:09 +0000 (0:00:00.179) 0:00:09.822 ********** 2026-03-27 00:43:15.068265 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068271 | orchestrator | 2026-03-27 
00:43:15.068277 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-27 00:43:15.068282 | orchestrator | Friday 27 March 2026 00:43:09 +0000 (0:00:00.122) 0:00:09.945 ********** 2026-03-27 00:43:15.068289 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '49c52ee7-6668-5cd2-bd86-f7267953750e'}}) 2026-03-27 00:43:15.068299 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2cf1a901-b2f7-5490-8423-90f944953f5f'}}) 2026-03-27 00:43:15.068308 | orchestrator | 2026-03-27 00:43:15.068318 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-27 00:43:15.068328 | orchestrator | Friday 27 March 2026 00:43:09 +0000 (0:00:00.180) 0:00:10.126 ********** 2026-03-27 00:43:15.068339 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'}) 2026-03-27 00:43:15.068347 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'}) 2026-03-27 00:43:15.068353 | orchestrator | 2026-03-27 00:43:15.068360 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-27 00:43:15.068365 | orchestrator | Friday 27 March 2026 00:43:11 +0000 (0:00:01.965) 0:00:12.091 ********** 2026-03-27 00:43:15.068371 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:15.068378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:15.068384 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068390 
| orchestrator | 2026-03-27 00:43:15.068395 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-27 00:43:15.068401 | orchestrator | Friday 27 March 2026 00:43:11 +0000 (0:00:00.168) 0:00:12.260 ********** 2026-03-27 00:43:15.068407 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'}) 2026-03-27 00:43:15.068413 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'}) 2026-03-27 00:43:15.068418 | orchestrator | 2026-03-27 00:43:15.068424 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-27 00:43:15.068430 | orchestrator | Friday 27 March 2026 00:43:13 +0000 (0:00:01.439) 0:00:13.700 ********** 2026-03-27 00:43:15.068435 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:15.068441 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:15.068447 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068452 | orchestrator | 2026-03-27 00:43:15.068458 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-27 00:43:15.068470 | orchestrator | Friday 27 March 2026 00:43:13 +0000 (0:00:00.146) 0:00:13.846 ********** 2026-03-27 00:43:15.068490 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068497 | orchestrator | 2026-03-27 00:43:15.068504 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-27 00:43:15.068511 | orchestrator | Friday 27 March 2026 00:43:13 
+0000 (0:00:00.128) 0:00:13.975 ********** 2026-03-27 00:43:15.068518 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:15.068524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:15.068531 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068537 | orchestrator | 2026-03-27 00:43:15.068544 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-27 00:43:15.068550 | orchestrator | Friday 27 March 2026 00:43:13 +0000 (0:00:00.253) 0:00:14.229 ********** 2026-03-27 00:43:15.068557 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068564 | orchestrator | 2026-03-27 00:43:15.068570 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-27 00:43:15.068577 | orchestrator | Friday 27 March 2026 00:43:13 +0000 (0:00:00.131) 0:00:14.360 ********** 2026-03-27 00:43:15.068584 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:15.068591 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:15.068598 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068604 | orchestrator | 2026-03-27 00:43:15.068611 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-27 00:43:15.068618 | orchestrator | Friday 27 March 2026 00:43:14 +0000 (0:00:00.147) 0:00:14.508 ********** 2026-03-27 00:43:15.068624 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068631 
| orchestrator | 2026-03-27 00:43:15.068638 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-27 00:43:15.068645 | orchestrator | Friday 27 March 2026 00:43:14 +0000 (0:00:00.136) 0:00:14.644 ********** 2026-03-27 00:43:15.068651 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:15.068658 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:15.068668 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068677 | orchestrator | 2026-03-27 00:43:15.068687 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-27 00:43:15.068698 | orchestrator | Friday 27 March 2026 00:43:14 +0000 (0:00:00.143) 0:00:14.787 ********** 2026-03-27 00:43:15.068707 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:43:15.068757 | orchestrator | 2026-03-27 00:43:15.068766 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-27 00:43:15.068775 | orchestrator | Friday 27 March 2026 00:43:14 +0000 (0:00:00.122) 0:00:14.909 ********** 2026-03-27 00:43:15.068783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:15.068796 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:15.068805 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068814 | orchestrator | 2026-03-27 00:43:15.068823 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 
2026-03-27 00:43:15.068840 | orchestrator | Friday 27 March 2026 00:43:14 +0000 (0:00:00.147) 0:00:15.057 ********** 2026-03-27 00:43:15.068855 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:15.068869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:15.068879 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068888 | orchestrator | 2026-03-27 00:43:15.068899 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-27 00:43:15.068908 | orchestrator | Friday 27 March 2026 00:43:14 +0000 (0:00:00.145) 0:00:15.202 ********** 2026-03-27 00:43:15.068917 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:15.068926 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:15.068936 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068942 | orchestrator | 2026-03-27 00:43:15.068948 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-27 00:43:15.068954 | orchestrator | Friday 27 March 2026 00:43:14 +0000 (0:00:00.144) 0:00:15.347 ********** 2026-03-27 00:43:15.068968 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:15.068974 | orchestrator | 2026-03-27 00:43:15.068980 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-27 00:43:15.068997 | orchestrator | Friday 27 March 2026 00:43:15 +0000 (0:00:00.138) 0:00:15.485 ********** 2026-03-27 
00:43:21.195613 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.195683 | orchestrator | 2026-03-27 00:43:21.195689 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-27 00:43:21.195695 | orchestrator | Friday 27 March 2026 00:43:15 +0000 (0:00:00.130) 0:00:15.616 ********** 2026-03-27 00:43:21.195699 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.195703 | orchestrator | 2026-03-27 00:43:21.195707 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-27 00:43:21.195711 | orchestrator | Friday 27 March 2026 00:43:15 +0000 (0:00:00.124) 0:00:15.741 ********** 2026-03-27 00:43:21.195715 | orchestrator | ok: [testbed-node-3] => { 2026-03-27 00:43:21.195721 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-27 00:43:21.195725 | orchestrator | } 2026-03-27 00:43:21.195729 | orchestrator | 2026-03-27 00:43:21.195733 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-27 00:43:21.195737 | orchestrator | Friday 27 March 2026 00:43:15 +0000 (0:00:00.337) 0:00:16.078 ********** 2026-03-27 00:43:21.195741 | orchestrator | ok: [testbed-node-3] => { 2026-03-27 00:43:21.195745 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-27 00:43:21.195749 | orchestrator | } 2026-03-27 00:43:21.195753 | orchestrator | 2026-03-27 00:43:21.195756 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-27 00:43:21.195760 | orchestrator | Friday 27 March 2026 00:43:15 +0000 (0:00:00.146) 0:00:16.225 ********** 2026-03-27 00:43:21.195764 | orchestrator | ok: [testbed-node-3] => { 2026-03-27 00:43:21.195768 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-27 00:43:21.195772 | orchestrator | } 2026-03-27 00:43:21.195776 | orchestrator | 2026-03-27 00:43:21.195780 | orchestrator | TASK [Gather DB VGs with total and available size 
in bytes] ******************** 2026-03-27 00:43:21.195783 | orchestrator | Friday 27 March 2026 00:43:15 +0000 (0:00:00.147) 0:00:16.372 ********** 2026-03-27 00:43:21.195787 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:43:21.195792 | orchestrator | 2026-03-27 00:43:21.195806 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-27 00:43:21.195810 | orchestrator | Friday 27 March 2026 00:43:16 +0000 (0:00:00.667) 0:00:17.039 ********** 2026-03-27 00:43:21.195825 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:43:21.195829 | orchestrator | 2026-03-27 00:43:21.195833 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-27 00:43:21.195837 | orchestrator | Friday 27 March 2026 00:43:17 +0000 (0:00:00.499) 0:00:17.539 ********** 2026-03-27 00:43:21.195841 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:43:21.195844 | orchestrator | 2026-03-27 00:43:21.195848 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-27 00:43:21.195852 | orchestrator | Friday 27 March 2026 00:43:17 +0000 (0:00:00.533) 0:00:18.073 ********** 2026-03-27 00:43:21.195856 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:43:21.195859 | orchestrator | 2026-03-27 00:43:21.195863 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-27 00:43:21.195867 | orchestrator | Friday 27 March 2026 00:43:17 +0000 (0:00:00.153) 0:00:18.227 ********** 2026-03-27 00:43:21.195871 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.195875 | orchestrator | 2026-03-27 00:43:21.195878 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-27 00:43:21.195882 | orchestrator | Friday 27 March 2026 00:43:17 +0000 (0:00:00.092) 0:00:18.319 ********** 2026-03-27 00:43:21.195886 | orchestrator | skipping: [testbed-node-3] 2026-03-27 
00:43:21.195890 | orchestrator | 2026-03-27 00:43:21.195893 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-27 00:43:21.195897 | orchestrator | Friday 27 March 2026 00:43:17 +0000 (0:00:00.086) 0:00:18.406 ********** 2026-03-27 00:43:21.195901 | orchestrator | ok: [testbed-node-3] => { 2026-03-27 00:43:21.195905 | orchestrator |  "vgs_report": { 2026-03-27 00:43:21.195909 | orchestrator |  "vg": [] 2026-03-27 00:43:21.195913 | orchestrator |  } 2026-03-27 00:43:21.195917 | orchestrator | } 2026-03-27 00:43:21.195920 | orchestrator | 2026-03-27 00:43:21.195924 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-27 00:43:21.195928 | orchestrator | Friday 27 March 2026 00:43:18 +0000 (0:00:00.132) 0:00:18.539 ********** 2026-03-27 00:43:21.195932 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.195935 | orchestrator | 2026-03-27 00:43:21.195939 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-27 00:43:21.195943 | orchestrator | Friday 27 March 2026 00:43:18 +0000 (0:00:00.196) 0:00:18.735 ********** 2026-03-27 00:43:21.195947 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.195950 | orchestrator | 2026-03-27 00:43:21.195954 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-27 00:43:21.195958 | orchestrator | Friday 27 March 2026 00:43:18 +0000 (0:00:00.143) 0:00:18.878 ********** 2026-03-27 00:43:21.195962 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.195966 | orchestrator | 2026-03-27 00:43:21.195969 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-27 00:43:21.195973 | orchestrator | Friday 27 March 2026 00:43:18 +0000 (0:00:00.269) 0:00:19.148 ********** 2026-03-27 00:43:21.195977 | orchestrator | skipping: [testbed-node-3] 2026-03-27 
00:43:21.195980 | orchestrator | 2026-03-27 00:43:21.195984 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-27 00:43:21.195989 | orchestrator | Friday 27 March 2026 00:43:18 +0000 (0:00:00.129) 0:00:19.277 ********** 2026-03-27 00:43:21.195995 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196001 | orchestrator | 2026-03-27 00:43:21.196007 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-27 00:43:21.196013 | orchestrator | Friday 27 March 2026 00:43:18 +0000 (0:00:00.138) 0:00:19.416 ********** 2026-03-27 00:43:21.196018 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196024 | orchestrator | 2026-03-27 00:43:21.196030 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-27 00:43:21.196036 | orchestrator | Friday 27 March 2026 00:43:19 +0000 (0:00:00.132) 0:00:19.548 ********** 2026-03-27 00:43:21.196042 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196110 | orchestrator | 2026-03-27 00:43:21.196115 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-27 00:43:21.196118 | orchestrator | Friday 27 March 2026 00:43:19 +0000 (0:00:00.134) 0:00:19.682 ********** 2026-03-27 00:43:21.196134 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196138 | orchestrator | 2026-03-27 00:43:21.196142 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-27 00:43:21.196145 | orchestrator | Friday 27 March 2026 00:43:19 +0000 (0:00:00.132) 0:00:19.815 ********** 2026-03-27 00:43:21.196149 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196153 | orchestrator | 2026-03-27 00:43:21.196156 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-27 00:43:21.196160 | orchestrator | Friday 27 
March 2026 00:43:19 +0000 (0:00:00.124) 0:00:19.939 ********** 2026-03-27 00:43:21.196164 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196168 | orchestrator | 2026-03-27 00:43:21.196171 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-27 00:43:21.196175 | orchestrator | Friday 27 March 2026 00:43:19 +0000 (0:00:00.144) 0:00:20.084 ********** 2026-03-27 00:43:21.196179 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196183 | orchestrator | 2026-03-27 00:43:21.196187 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-27 00:43:21.196191 | orchestrator | Friday 27 March 2026 00:43:19 +0000 (0:00:00.113) 0:00:20.197 ********** 2026-03-27 00:43:21.196196 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196200 | orchestrator | 2026-03-27 00:43:21.196204 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-27 00:43:21.196208 | orchestrator | Friday 27 March 2026 00:43:19 +0000 (0:00:00.139) 0:00:20.336 ********** 2026-03-27 00:43:21.196213 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196217 | orchestrator | 2026-03-27 00:43:21.196221 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-27 00:43:21.196225 | orchestrator | Friday 27 March 2026 00:43:20 +0000 (0:00:00.124) 0:00:20.461 ********** 2026-03-27 00:43:21.196230 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196234 | orchestrator | 2026-03-27 00:43:21.196241 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-27 00:43:21.196246 | orchestrator | Friday 27 March 2026 00:43:20 +0000 (0:00:00.117) 0:00:20.578 ********** 2026-03-27 00:43:21.196251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 
'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:21.196258 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:21.196262 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196266 | orchestrator | 2026-03-27 00:43:21.196271 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-27 00:43:21.196275 | orchestrator | Friday 27 March 2026 00:43:20 +0000 (0:00:00.146) 0:00:20.724 ********** 2026-03-27 00:43:21.196279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:21.196284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:21.196288 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196292 | orchestrator | 2026-03-27 00:43:21.196296 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-27 00:43:21.196301 | orchestrator | Friday 27 March 2026 00:43:20 +0000 (0:00:00.404) 0:00:21.130 ********** 2026-03-27 00:43:21.196305 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:21.196309 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:21.196316 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196320 | orchestrator | 2026-03-27 00:43:21.196325 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-27 
00:43:21.196329 | orchestrator | Friday 27 March 2026 00:43:20 +0000 (0:00:00.137) 0:00:21.267 ********** 2026-03-27 00:43:21.196333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:21.196337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:21.196342 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196346 | orchestrator | 2026-03-27 00:43:21.196350 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-27 00:43:21.196354 | orchestrator | Friday 27 March 2026 00:43:20 +0000 (0:00:00.147) 0:00:21.415 ********** 2026-03-27 00:43:21.196359 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:21.196363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:21.196367 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:21.196371 | orchestrator | 2026-03-27 00:43:21.196376 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-27 00:43:21.196380 | orchestrator | Friday 27 March 2026 00:43:21 +0000 (0:00:00.131) 0:00:21.546 ********** 2026-03-27 00:43:21.196387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:27.265387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 
'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:27.265489 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:27.265503 | orchestrator | 2026-03-27 00:43:27.265511 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-27 00:43:27.265518 | orchestrator | Friday 27 March 2026 00:43:21 +0000 (0:00:00.153) 0:00:21.700 ********** 2026-03-27 00:43:27.265523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:27.265527 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:27.265531 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:27.265535 | orchestrator | 2026-03-27 00:43:27.265539 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-27 00:43:27.265543 | orchestrator | Friday 27 March 2026 00:43:21 +0000 (0:00:00.248) 0:00:21.948 ********** 2026-03-27 00:43:27.265547 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:27.265551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:27.265555 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:27.265558 | orchestrator | 2026-03-27 00:43:27.265562 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-27 00:43:27.265566 | orchestrator | Friday 27 March 2026 00:43:21 +0000 (0:00:00.142) 0:00:22.091 ********** 2026-03-27 00:43:27.265570 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:43:27.265575 | 
orchestrator | 2026-03-27 00:43:27.265593 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-27 00:43:27.265597 | orchestrator | Friday 27 March 2026 00:43:22 +0000 (0:00:00.524) 0:00:22.615 ********** 2026-03-27 00:43:27.265601 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:43:27.265605 | orchestrator | 2026-03-27 00:43:27.265609 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-27 00:43:27.265624 | orchestrator | Friday 27 March 2026 00:43:22 +0000 (0:00:00.514) 0:00:23.130 ********** 2026-03-27 00:43:27.265628 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:43:27.265632 | orchestrator | 2026-03-27 00:43:27.265636 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-27 00:43:27.265640 | orchestrator | Friday 27 March 2026 00:43:22 +0000 (0:00:00.130) 0:00:23.261 ********** 2026-03-27 00:43:27.265644 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'vg_name': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'}) 2026-03-27 00:43:27.265649 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'vg_name': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'}) 2026-03-27 00:43:27.265652 | orchestrator | 2026-03-27 00:43:27.265657 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-27 00:43:27.265661 | orchestrator | Friday 27 March 2026 00:43:23 +0000 (0:00:00.178) 0:00:23.440 ********** 2026-03-27 00:43:27.265665 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:27.265669 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 
'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:27.265672 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:27.265676 | orchestrator | 2026-03-27 00:43:27.265680 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-27 00:43:27.265683 | orchestrator | Friday 27 March 2026 00:43:23 +0000 (0:00:00.133) 0:00:23.573 ********** 2026-03-27 00:43:27.265687 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:27.265691 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:27.265695 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:27.265698 | orchestrator | 2026-03-27 00:43:27.265702 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-27 00:43:27.265706 | orchestrator | Friday 27 March 2026 00:43:23 +0000 (0:00:00.336) 0:00:23.909 ********** 2026-03-27 00:43:27.265710 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'})  2026-03-27 00:43:27.265713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'})  2026-03-27 00:43:27.265717 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:43:27.265721 | orchestrator | 2026-03-27 00:43:27.265724 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-27 00:43:27.265728 | orchestrator | Friday 27 March 2026 00:43:23 +0000 (0:00:00.154) 0:00:24.064 ********** 2026-03-27 00:43:27.265743 | orchestrator | ok: [testbed-node-3] => { 2026-03-27 
00:43:27.265747 | orchestrator |  "lvm_report": { 2026-03-27 00:43:27.265751 | orchestrator |  "lv": [ 2026-03-27 00:43:27.265756 | orchestrator |  { 2026-03-27 00:43:27.265760 | orchestrator |  "lv_name": "osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f", 2026-03-27 00:43:27.265765 | orchestrator |  "vg_name": "ceph-2cf1a901-b2f7-5490-8423-90f944953f5f" 2026-03-27 00:43:27.265769 | orchestrator |  }, 2026-03-27 00:43:27.265776 | orchestrator |  { 2026-03-27 00:43:27.265780 | orchestrator |  "lv_name": "osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e", 2026-03-27 00:43:27.265783 | orchestrator |  "vg_name": "ceph-49c52ee7-6668-5cd2-bd86-f7267953750e" 2026-03-27 00:43:27.265787 | orchestrator |  } 2026-03-27 00:43:27.265791 | orchestrator |  ], 2026-03-27 00:43:27.265795 | orchestrator |  "pv": [ 2026-03-27 00:43:27.265798 | orchestrator |  { 2026-03-27 00:43:27.265802 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-27 00:43:27.265806 | orchestrator |  "vg_name": "ceph-49c52ee7-6668-5cd2-bd86-f7267953750e" 2026-03-27 00:43:27.265810 | orchestrator |  }, 2026-03-27 00:43:27.265814 | orchestrator |  { 2026-03-27 00:43:27.265818 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-27 00:43:27.265821 | orchestrator |  "vg_name": "ceph-2cf1a901-b2f7-5490-8423-90f944953f5f" 2026-03-27 00:43:27.265825 | orchestrator |  } 2026-03-27 00:43:27.265829 | orchestrator |  ] 2026-03-27 00:43:27.265833 | orchestrator |  } 2026-03-27 00:43:27.265837 | orchestrator | } 2026-03-27 00:43:27.265841 | orchestrator | 2026-03-27 00:43:27.265845 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-27 00:43:27.265849 | orchestrator | 2026-03-27 00:43:27.265852 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-27 00:43:27.265859 | orchestrator | Friday 27 March 2026 00:43:23 +0000 (0:00:00.322) 0:00:24.387 ********** 2026-03-27 00:43:27.265863 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-03-27 00:43:27.265867 | orchestrator | 2026-03-27 00:43:27.265870 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-27 00:43:27.265874 | orchestrator | Friday 27 March 2026 00:43:24 +0000 (0:00:00.277) 0:00:24.664 ********** 2026-03-27 00:43:27.265878 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:43:27.265882 | orchestrator | 2026-03-27 00:43:27.265885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:27.265889 | orchestrator | Friday 27 March 2026 00:43:24 +0000 (0:00:00.273) 0:00:24.938 ********** 2026-03-27 00:43:27.265893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-27 00:43:27.265897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-27 00:43:27.265900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-27 00:43:27.265904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-27 00:43:27.265908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-27 00:43:27.265912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-27 00:43:27.265915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-27 00:43:27.265919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-27 00:43:27.265923 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-27 00:43:27.265927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-27 00:43:27.265930 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-27 00:43:27.265934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-27 00:43:27.265938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-27 00:43:27.265942 | orchestrator | 2026-03-27 00:43:27.265946 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:27.265950 | orchestrator | Friday 27 March 2026 00:43:24 +0000 (0:00:00.476) 0:00:25.415 ********** 2026-03-27 00:43:27.265955 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:27.265963 | orchestrator | 2026-03-27 00:43:27.265967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:27.265971 | orchestrator | Friday 27 March 2026 00:43:25 +0000 (0:00:00.233) 0:00:25.648 ********** 2026-03-27 00:43:27.265975 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:27.265980 | orchestrator | 2026-03-27 00:43:27.265984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:27.265988 | orchestrator | Friday 27 March 2026 00:43:25 +0000 (0:00:00.282) 0:00:25.931 ********** 2026-03-27 00:43:27.265992 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:27.265996 | orchestrator | 2026-03-27 00:43:27.266000 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:27.266005 | orchestrator | Friday 27 March 2026 00:43:25 +0000 (0:00:00.220) 0:00:26.152 ********** 2026-03-27 00:43:27.266009 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:27.266067 | orchestrator | 2026-03-27 00:43:27.266073 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:27.266077 | orchestrator | Friday 27 March 2026 00:43:26 +0000 
(0:00:00.950) 0:00:27.102 ********** 2026-03-27 00:43:27.266081 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:27.266085 | orchestrator | 2026-03-27 00:43:27.266089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:27.266093 | orchestrator | Friday 27 March 2026 00:43:26 +0000 (0:00:00.312) 0:00:27.414 ********** 2026-03-27 00:43:27.266098 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:27.266102 | orchestrator | 2026-03-27 00:43:27.266110 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:38.703433 | orchestrator | Friday 27 March 2026 00:43:27 +0000 (0:00:00.268) 0:00:27.683 ********** 2026-03-27 00:43:38.703565 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.703592 | orchestrator | 2026-03-27 00:43:38.703610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:38.703624 | orchestrator | Friday 27 March 2026 00:43:27 +0000 (0:00:00.243) 0:00:27.926 ********** 2026-03-27 00:43:38.703633 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.703643 | orchestrator | 2026-03-27 00:43:38.703653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:38.703663 | orchestrator | Friday 27 March 2026 00:43:27 +0000 (0:00:00.226) 0:00:28.153 ********** 2026-03-27 00:43:38.703673 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376) 2026-03-27 00:43:38.703684 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376) 2026-03-27 00:43:38.703693 | orchestrator | 2026-03-27 00:43:38.703703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:38.703712 | orchestrator | Friday 27 March 2026 00:43:28 +0000 
(0:00:00.524) 0:00:28.678 ********** 2026-03-27 00:43:38.703722 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_86c6402f-d184-4443-979d-ecd201841231) 2026-03-27 00:43:38.703732 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_86c6402f-d184-4443-979d-ecd201841231) 2026-03-27 00:43:38.703741 | orchestrator | 2026-03-27 00:43:38.703751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:38.703761 | orchestrator | Friday 27 March 2026 00:43:28 +0000 (0:00:00.442) 0:00:29.121 ********** 2026-03-27 00:43:38.703770 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_131bb9e5-0133-49dd-b67b-125236a47022) 2026-03-27 00:43:38.703780 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_131bb9e5-0133-49dd-b67b-125236a47022) 2026-03-27 00:43:38.703790 | orchestrator | 2026-03-27 00:43:38.703799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:38.703809 | orchestrator | Friday 27 March 2026 00:43:29 +0000 (0:00:00.418) 0:00:29.540 ********** 2026-03-27 00:43:38.703818 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2796c507-44e5-4ccf-b3e2-014e00eaf9ef) 2026-03-27 00:43:38.703851 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2796c507-44e5-4ccf-b3e2-014e00eaf9ef) 2026-03-27 00:43:38.703861 | orchestrator | 2026-03-27 00:43:38.703871 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:38.703880 | orchestrator | Friday 27 March 2026 00:43:29 +0000 (0:00:00.453) 0:00:29.993 ********** 2026-03-27 00:43:38.703889 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-27 00:43:38.703899 | orchestrator | 2026-03-27 00:43:38.703908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 
00:43:38.703919 | orchestrator | Friday 27 March 2026 00:43:29 +0000 (0:00:00.391) 0:00:30.385 ********** 2026-03-27 00:43:38.703931 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-27 00:43:38.703947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-27 00:43:38.703964 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-27 00:43:38.703979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-27 00:43:38.703995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-27 00:43:38.704010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-27 00:43:38.704024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-27 00:43:38.704042 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-27 00:43:38.704142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-27 00:43:38.704162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-27 00:43:38.704175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-27 00:43:38.704185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-27 00:43:38.704195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-27 00:43:38.704204 | orchestrator | 2026-03-27 00:43:38.704214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704223 | 
orchestrator | Friday 27 March 2026 00:43:30 +0000 (0:00:00.795) 0:00:31.180 ********** 2026-03-27 00:43:38.704234 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.704250 | orchestrator | 2026-03-27 00:43:38.704266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704282 | orchestrator | Friday 27 March 2026 00:43:31 +0000 (0:00:00.244) 0:00:31.425 ********** 2026-03-27 00:43:38.704297 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.704311 | orchestrator | 2026-03-27 00:43:38.704326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704342 | orchestrator | Friday 27 March 2026 00:43:31 +0000 (0:00:00.197) 0:00:31.623 ********** 2026-03-27 00:43:38.704357 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.704371 | orchestrator | 2026-03-27 00:43:38.704411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704427 | orchestrator | Friday 27 March 2026 00:43:31 +0000 (0:00:00.230) 0:00:31.854 ********** 2026-03-27 00:43:38.704442 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.704457 | orchestrator | 2026-03-27 00:43:38.704473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704487 | orchestrator | Friday 27 March 2026 00:43:31 +0000 (0:00:00.209) 0:00:32.063 ********** 2026-03-27 00:43:38.704502 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.704516 | orchestrator | 2026-03-27 00:43:38.704532 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704564 | orchestrator | Friday 27 March 2026 00:43:31 +0000 (0:00:00.207) 0:00:32.270 ********** 2026-03-27 00:43:38.704580 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.704596 | orchestrator | 2026-03-27 
00:43:38.704612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704628 | orchestrator | Friday 27 March 2026 00:43:32 +0000 (0:00:00.251) 0:00:32.522 ********** 2026-03-27 00:43:38.704645 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.704661 | orchestrator | 2026-03-27 00:43:38.704677 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704692 | orchestrator | Friday 27 March 2026 00:43:32 +0000 (0:00:00.234) 0:00:32.757 ********** 2026-03-27 00:43:38.704730 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.704748 | orchestrator | 2026-03-27 00:43:38.704763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704781 | orchestrator | Friday 27 March 2026 00:43:32 +0000 (0:00:00.237) 0:00:32.994 ********** 2026-03-27 00:43:38.704796 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-27 00:43:38.704811 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-27 00:43:38.704828 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-27 00:43:38.704844 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-27 00:43:38.704860 | orchestrator | 2026-03-27 00:43:38.704874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704888 | orchestrator | Friday 27 March 2026 00:43:33 +0000 (0:00:00.906) 0:00:33.901 ********** 2026-03-27 00:43:38.704902 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.704917 | orchestrator | 2026-03-27 00:43:38.704934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.704951 | orchestrator | Friday 27 March 2026 00:43:33 +0000 (0:00:00.220) 0:00:34.121 ********** 2026-03-27 00:43:38.704967 | orchestrator | skipping: [testbed-node-4] 2026-03-27 
00:43:38.704984 | orchestrator | 2026-03-27 00:43:38.705000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.705015 | orchestrator | Friday 27 March 2026 00:43:33 +0000 (0:00:00.253) 0:00:34.375 ********** 2026-03-27 00:43:38.705031 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.705047 | orchestrator | 2026-03-27 00:43:38.705097 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-27 00:43:38.705114 | orchestrator | Friday 27 March 2026 00:43:34 +0000 (0:00:00.928) 0:00:35.304 ********** 2026-03-27 00:43:38.705130 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.705146 | orchestrator | 2026-03-27 00:43:38.705162 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-27 00:43:38.705178 | orchestrator | Friday 27 March 2026 00:43:35 +0000 (0:00:00.229) 0:00:35.533 ********** 2026-03-27 00:43:38.705193 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.705208 | orchestrator | 2026-03-27 00:43:38.705221 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-27 00:43:38.705235 | orchestrator | Friday 27 March 2026 00:43:35 +0000 (0:00:00.159) 0:00:35.693 ********** 2026-03-27 00:43:38.705250 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'}}) 2026-03-27 00:43:38.705264 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '627e7bc4-4e7d-5af1-903b-8d115676372d'}}) 2026-03-27 00:43:38.705278 | orchestrator | 2026-03-27 00:43:38.705292 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-27 00:43:38.705306 | orchestrator | Friday 27 March 2026 00:43:35 +0000 (0:00:00.198) 0:00:35.891 ********** 2026-03-27 00:43:38.705322 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'}) 2026-03-27 00:43:38.705338 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'}) 2026-03-27 00:43:38.705371 | orchestrator | 2026-03-27 00:43:38.705385 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-27 00:43:38.705399 | orchestrator | Friday 27 March 2026 00:43:37 +0000 (0:00:01.837) 0:00:37.729 ********** 2026-03-27 00:43:38.705413 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:38.705429 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:38.705443 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:38.705458 | orchestrator | 2026-03-27 00:43:38.705472 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-27 00:43:38.705486 | orchestrator | Friday 27 March 2026 00:43:37 +0000 (0:00:00.169) 0:00:37.898 ********** 2026-03-27 00:43:38.705501 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'}) 2026-03-27 00:43:38.705531 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'}) 2026-03-27 00:43:44.642856 | orchestrator | 2026-03-27 00:43:44.642968 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-27 00:43:44.642985 | orchestrator | Friday 27 March 2026 
00:43:38 +0000 (0:00:01.304) 0:00:39.203 ********** 2026-03-27 00:43:44.642997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:44.643010 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:44.643021 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643033 | orchestrator | 2026-03-27 00:43:44.643045 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-27 00:43:44.643111 | orchestrator | Friday 27 March 2026 00:43:38 +0000 (0:00:00.163) 0:00:39.366 ********** 2026-03-27 00:43:44.643124 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643135 | orchestrator | 2026-03-27 00:43:44.643146 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-27 00:43:44.643156 | orchestrator | Friday 27 March 2026 00:43:39 +0000 (0:00:00.155) 0:00:39.521 ********** 2026-03-27 00:43:44.643188 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:44.643207 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:44.643224 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643241 | orchestrator | 2026-03-27 00:43:44.643260 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-27 00:43:44.643280 | orchestrator | Friday 27 March 2026 00:43:39 +0000 (0:00:00.165) 0:00:39.686 ********** 2026-03-27 00:43:44.643299 | orchestrator | skipping: [testbed-node-4] 2026-03-27 
00:43:44.643316 | orchestrator | 2026-03-27 00:43:44.643332 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-27 00:43:44.643344 | orchestrator | Friday 27 March 2026 00:43:39 +0000 (0:00:00.140) 0:00:39.827 ********** 2026-03-27 00:43:44.643354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:44.643365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:44.643401 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643414 | orchestrator | 2026-03-27 00:43:44.643426 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-27 00:43:44.643439 | orchestrator | Friday 27 March 2026 00:43:39 +0000 (0:00:00.149) 0:00:39.976 ********** 2026-03-27 00:43:44.643451 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643465 | orchestrator | 2026-03-27 00:43:44.643477 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-27 00:43:44.643488 | orchestrator | Friday 27 March 2026 00:43:39 +0000 (0:00:00.410) 0:00:40.386 ********** 2026-03-27 00:43:44.643498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:44.643510 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:44.643520 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643531 | orchestrator | 2026-03-27 00:43:44.643541 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2026-03-27 00:43:44.643552 | orchestrator | Friday 27 March 2026 00:43:40 +0000 (0:00:00.157) 0:00:40.544 ********** 2026-03-27 00:43:44.643563 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:43:44.643574 | orchestrator | 2026-03-27 00:43:44.643585 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-27 00:43:44.643596 | orchestrator | Friday 27 March 2026 00:43:40 +0000 (0:00:00.144) 0:00:40.689 ********** 2026-03-27 00:43:44.643607 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:44.643617 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:44.643628 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643639 | orchestrator | 2026-03-27 00:43:44.643649 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-27 00:43:44.643660 | orchestrator | Friday 27 March 2026 00:43:40 +0000 (0:00:00.171) 0:00:40.860 ********** 2026-03-27 00:43:44.643671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:44.643682 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:44.643692 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643703 | orchestrator | 2026-03-27 00:43:44.643714 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-27 00:43:44.643744 | orchestrator | Friday 27 March 2026 00:43:40 +0000 (0:00:00.164) 0:00:41.025 
********** 2026-03-27 00:43:44.643756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:44.643767 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:44.643777 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643788 | orchestrator | 2026-03-27 00:43:44.643799 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-27 00:43:44.643809 | orchestrator | Friday 27 March 2026 00:43:40 +0000 (0:00:00.184) 0:00:41.209 ********** 2026-03-27 00:43:44.643820 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643830 | orchestrator | 2026-03-27 00:43:44.643841 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-27 00:43:44.643852 | orchestrator | Friday 27 March 2026 00:43:40 +0000 (0:00:00.150) 0:00:41.359 ********** 2026-03-27 00:43:44.643870 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643881 | orchestrator | 2026-03-27 00:43:44.643891 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-27 00:43:44.643908 | orchestrator | Friday 27 March 2026 00:43:41 +0000 (0:00:00.176) 0:00:41.535 ********** 2026-03-27 00:43:44.643919 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.643930 | orchestrator | 2026-03-27 00:43:44.643941 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-27 00:43:44.643951 | orchestrator | Friday 27 March 2026 00:43:41 +0000 (0:00:00.128) 0:00:41.663 ********** 2026-03-27 00:43:44.643962 | orchestrator | ok: [testbed-node-4] => { 2026-03-27 00:43:44.643973 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-27 
00:43:44.643984 | orchestrator | } 2026-03-27 00:43:44.643995 | orchestrator | 2026-03-27 00:43:44.644006 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-27 00:43:44.644017 | orchestrator | Friday 27 March 2026 00:43:41 +0000 (0:00:00.146) 0:00:41.810 ********** 2026-03-27 00:43:44.644028 | orchestrator | ok: [testbed-node-4] => { 2026-03-27 00:43:44.644038 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-27 00:43:44.644049 | orchestrator | } 2026-03-27 00:43:44.644089 | orchestrator | 2026-03-27 00:43:44.644100 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-27 00:43:44.644111 | orchestrator | Friday 27 March 2026 00:43:41 +0000 (0:00:00.139) 0:00:41.950 ********** 2026-03-27 00:43:44.644122 | orchestrator | ok: [testbed-node-4] => { 2026-03-27 00:43:44.644133 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-27 00:43:44.644144 | orchestrator | } 2026-03-27 00:43:44.644154 | orchestrator | 2026-03-27 00:43:44.644165 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-27 00:43:44.644176 | orchestrator | Friday 27 March 2026 00:43:41 +0000 (0:00:00.145) 0:00:42.095 ********** 2026-03-27 00:43:44.644187 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:43:44.644197 | orchestrator | 2026-03-27 00:43:44.644208 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-27 00:43:44.644219 | orchestrator | Friday 27 March 2026 00:43:42 +0000 (0:00:00.773) 0:00:42.869 ********** 2026-03-27 00:43:44.644230 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:43:44.644248 | orchestrator | 2026-03-27 00:43:44.644266 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-27 00:43:44.644285 | orchestrator | Friday 27 March 2026 00:43:42 +0000 (0:00:00.541) 0:00:43.410 ********** 2026-03-27 
00:43:44.644304 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:43:44.644323 | orchestrator | 2026-03-27 00:43:44.644343 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-27 00:43:44.644360 | orchestrator | Friday 27 March 2026 00:43:43 +0000 (0:00:00.518) 0:00:43.928 ********** 2026-03-27 00:43:44.644375 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:43:44.644386 | orchestrator | 2026-03-27 00:43:44.644396 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-27 00:43:44.644407 | orchestrator | Friday 27 March 2026 00:43:43 +0000 (0:00:00.167) 0:00:44.096 ********** 2026-03-27 00:43:44.644417 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.644428 | orchestrator | 2026-03-27 00:43:44.644439 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-27 00:43:44.644449 | orchestrator | Friday 27 March 2026 00:43:43 +0000 (0:00:00.106) 0:00:44.202 ********** 2026-03-27 00:43:44.644460 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.644471 | orchestrator | 2026-03-27 00:43:44.644481 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-27 00:43:44.644492 | orchestrator | Friday 27 March 2026 00:43:43 +0000 (0:00:00.099) 0:00:44.302 ********** 2026-03-27 00:43:44.644503 | orchestrator | ok: [testbed-node-4] => { 2026-03-27 00:43:44.644513 | orchestrator |  "vgs_report": { 2026-03-27 00:43:44.644524 | orchestrator |  "vg": [] 2026-03-27 00:43:44.644535 | orchestrator |  } 2026-03-27 00:43:44.644546 | orchestrator | } 2026-03-27 00:43:44.644565 | orchestrator | 2026-03-27 00:43:44.644576 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-27 00:43:44.644586 | orchestrator | Friday 27 March 2026 00:43:44 +0000 (0:00:00.138) 0:00:44.440 ********** 2026-03-27 00:43:44.644597 | 
orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.644608 | orchestrator | 2026-03-27 00:43:44.644618 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-27 00:43:44.644629 | orchestrator | Friday 27 March 2026 00:43:44 +0000 (0:00:00.186) 0:00:44.627 ********** 2026-03-27 00:43:44.644639 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.644650 | orchestrator | 2026-03-27 00:43:44.644660 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-27 00:43:44.644671 | orchestrator | Friday 27 March 2026 00:43:44 +0000 (0:00:00.151) 0:00:44.779 ********** 2026-03-27 00:43:44.644681 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.644692 | orchestrator | 2026-03-27 00:43:44.644702 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-27 00:43:44.644713 | orchestrator | Friday 27 March 2026 00:43:44 +0000 (0:00:00.138) 0:00:44.918 ********** 2026-03-27 00:43:44.644724 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:44.644735 | orchestrator | 2026-03-27 00:43:44.644753 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-27 00:43:49.842686 | orchestrator | Friday 27 March 2026 00:43:44 +0000 (0:00:00.141) 0:00:45.059 ********** 2026-03-27 00:43:49.842815 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.842840 | orchestrator | 2026-03-27 00:43:49.842854 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-27 00:43:49.842864 | orchestrator | Friday 27 March 2026 00:43:44 +0000 (0:00:00.138) 0:00:45.198 ********** 2026-03-27 00:43:49.842874 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.842883 | orchestrator | 2026-03-27 00:43:49.842893 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-03-27 00:43:49.842903 | orchestrator | Friday 27 March 2026 00:43:45 +0000 (0:00:00.444) 0:00:45.642 ********** 2026-03-27 00:43:49.842912 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.842922 | orchestrator | 2026-03-27 00:43:49.842931 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-27 00:43:49.842941 | orchestrator | Friday 27 March 2026 00:43:45 +0000 (0:00:00.168) 0:00:45.811 ********** 2026-03-27 00:43:49.842950 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.842959 | orchestrator | 2026-03-27 00:43:49.842969 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-27 00:43:49.842978 | orchestrator | Friday 27 March 2026 00:43:45 +0000 (0:00:00.164) 0:00:45.976 ********** 2026-03-27 00:43:49.842988 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.842997 | orchestrator | 2026-03-27 00:43:49.843006 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-27 00:43:49.843016 | orchestrator | Friday 27 March 2026 00:43:45 +0000 (0:00:00.148) 0:00:46.124 ********** 2026-03-27 00:43:49.843026 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843035 | orchestrator | 2026-03-27 00:43:49.843044 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-27 00:43:49.843081 | orchestrator | Friday 27 March 2026 00:43:45 +0000 (0:00:00.161) 0:00:46.286 ********** 2026-03-27 00:43:49.843092 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843102 | orchestrator | 2026-03-27 00:43:49.843131 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-27 00:43:49.843142 | orchestrator | Friday 27 March 2026 00:43:46 +0000 (0:00:00.161) 0:00:46.448 ********** 2026-03-27 00:43:49.843151 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843161 
| orchestrator | 2026-03-27 00:43:49.843171 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-27 00:43:49.843180 | orchestrator | Friday 27 March 2026 00:43:46 +0000 (0:00:00.149) 0:00:46.598 ********** 2026-03-27 00:43:49.843190 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843224 | orchestrator | 2026-03-27 00:43:49.843235 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-27 00:43:49.843246 | orchestrator | Friday 27 March 2026 00:43:46 +0000 (0:00:00.152) 0:00:46.750 ********** 2026-03-27 00:43:49.843257 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843268 | orchestrator | 2026-03-27 00:43:49.843279 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-27 00:43:49.843290 | orchestrator | Friday 27 March 2026 00:43:46 +0000 (0:00:00.150) 0:00:46.900 ********** 2026-03-27 00:43:49.843303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:49.843316 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:49.843326 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843337 | orchestrator | 2026-03-27 00:43:49.843348 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-27 00:43:49.843359 | orchestrator | Friday 27 March 2026 00:43:46 +0000 (0:00:00.165) 0:00:47.066 ********** 2026-03-27 00:43:49.843370 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:49.843382 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:49.843393 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843402 | orchestrator | 2026-03-27 00:43:49.843411 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-27 00:43:49.843421 | orchestrator | Friday 27 March 2026 00:43:46 +0000 (0:00:00.186) 0:00:47.253 ********** 2026-03-27 00:43:49.843430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:49.843440 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:49.843449 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843459 | orchestrator | 2026-03-27 00:43:49.843468 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-27 00:43:49.843478 | orchestrator | Friday 27 March 2026 00:43:46 +0000 (0:00:00.164) 0:00:47.417 ********** 2026-03-27 00:43:49.843487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:49.843497 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:49.843507 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843517 | orchestrator | 2026-03-27 00:43:49.843545 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-27 00:43:49.843555 | orchestrator | Friday 27 March 2026 00:43:47 +0000 (0:00:00.510) 0:00:47.928 ********** 2026-03-27 
00:43:49.843564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:49.843574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:49.843584 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843593 | orchestrator | 2026-03-27 00:43:49.843603 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-27 00:43:49.843612 | orchestrator | Friday 27 March 2026 00:43:47 +0000 (0:00:00.163) 0:00:48.092 ********** 2026-03-27 00:43:49.843628 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:49.843643 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:49.843653 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843662 | orchestrator | 2026-03-27 00:43:49.843671 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-27 00:43:49.843681 | orchestrator | Friday 27 March 2026 00:43:47 +0000 (0:00:00.156) 0:00:48.248 ********** 2026-03-27 00:43:49.843690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:49.843700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:49.843709 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843719 | orchestrator | 
2026-03-27 00:43:49.843728 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-27 00:43:49.843737 | orchestrator | Friday 27 March 2026 00:43:48 +0000 (0:00:00.189) 0:00:48.437 ********** 2026-03-27 00:43:49.843746 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:49.843756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:49.843766 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.843775 | orchestrator | 2026-03-27 00:43:49.843785 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-27 00:43:49.843794 | orchestrator | Friday 27 March 2026 00:43:48 +0000 (0:00:00.154) 0:00:48.592 ********** 2026-03-27 00:43:49.843804 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:43:49.843813 | orchestrator | 2026-03-27 00:43:49.843823 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-27 00:43:49.843832 | orchestrator | Friday 27 March 2026 00:43:48 +0000 (0:00:00.532) 0:00:49.124 ********** 2026-03-27 00:43:49.843842 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:43:49.843851 | orchestrator | 2026-03-27 00:43:49.843861 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-27 00:43:49.843870 | orchestrator | Friday 27 March 2026 00:43:49 +0000 (0:00:00.531) 0:00:49.656 ********** 2026-03-27 00:43:49.843879 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:43:49.843889 | orchestrator | 2026-03-27 00:43:49.843898 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-27 00:43:49.843908 | orchestrator | Friday 27 March 2026 
00:43:49 +0000 (0:00:00.171) 0:00:49.827 ********** 2026-03-27 00:43:49.843917 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'vg_name': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'}) 2026-03-27 00:43:49.843928 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'vg_name': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'}) 2026-03-27 00:43:49.843937 | orchestrator | 2026-03-27 00:43:49.843947 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-27 00:43:49.843956 | orchestrator | Friday 27 March 2026 00:43:49 +0000 (0:00:00.198) 0:00:50.025 ********** 2026-03-27 00:43:49.843966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:49.843975 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:49.843985 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:49.844000 | orchestrator | 2026-03-27 00:43:49.844010 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-27 00:43:49.844019 | orchestrator | Friday 27 March 2026 00:43:49 +0000 (0:00:00.155) 0:00:50.181 ********** 2026-03-27 00:43:49.844029 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:49.844044 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:56.300968 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:56.301083 | orchestrator | 2026-03-27 
00:43:56.301099 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-27 00:43:56.301109 | orchestrator | Friday 27 March 2026 00:43:49 +0000 (0:00:00.162) 0:00:50.343 ********** 2026-03-27 00:43:56.301118 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'})  2026-03-27 00:43:56.301128 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'})  2026-03-27 00:43:56.301136 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:43:56.301144 | orchestrator | 2026-03-27 00:43:56.301152 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-27 00:43:56.301164 | orchestrator | Friday 27 March 2026 00:43:50 +0000 (0:00:00.163) 0:00:50.507 ********** 2026-03-27 00:43:56.301178 | orchestrator | ok: [testbed-node-4] => { 2026-03-27 00:43:56.301191 | orchestrator |  "lvm_report": { 2026-03-27 00:43:56.301205 | orchestrator |  "lv": [ 2026-03-27 00:43:56.301234 | orchestrator |  { 2026-03-27 00:43:56.301247 | orchestrator |  "lv_name": "osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d", 2026-03-27 00:43:56.301261 | orchestrator |  "vg_name": "ceph-627e7bc4-4e7d-5af1-903b-8d115676372d" 2026-03-27 00:43:56.301274 | orchestrator |  }, 2026-03-27 00:43:56.301287 | orchestrator |  { 2026-03-27 00:43:56.301301 | orchestrator |  "lv_name": "osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f", 2026-03-27 00:43:56.301314 | orchestrator |  "vg_name": "ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f" 2026-03-27 00:43:56.301328 | orchestrator |  } 2026-03-27 00:43:56.301340 | orchestrator |  ], 2026-03-27 00:43:56.301348 | orchestrator |  "pv": [ 2026-03-27 00:43:56.301356 | orchestrator |  { 2026-03-27 00:43:56.301363 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-27 
00:43:56.301371 | orchestrator |  "vg_name": "ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f" 2026-03-27 00:43:56.301379 | orchestrator |  }, 2026-03-27 00:43:56.301387 | orchestrator |  { 2026-03-27 00:43:56.301394 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-27 00:43:56.301402 | orchestrator |  "vg_name": "ceph-627e7bc4-4e7d-5af1-903b-8d115676372d" 2026-03-27 00:43:56.301411 | orchestrator |  } 2026-03-27 00:43:56.301419 | orchestrator |  ] 2026-03-27 00:43:56.301426 | orchestrator |  } 2026-03-27 00:43:56.301434 | orchestrator | } 2026-03-27 00:43:56.301442 | orchestrator | 2026-03-27 00:43:56.301450 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-27 00:43:56.301458 | orchestrator | 2026-03-27 00:43:56.301465 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-27 00:43:56.301473 | orchestrator | Friday 27 March 2026 00:43:50 +0000 (0:00:00.602) 0:00:51.109 ********** 2026-03-27 00:43:56.301481 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-27 00:43:56.301489 | orchestrator | 2026-03-27 00:43:56.301497 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-27 00:43:56.301505 | orchestrator | Friday 27 March 2026 00:43:50 +0000 (0:00:00.239) 0:00:51.349 ********** 2026-03-27 00:43:56.301532 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:43:56.301542 | orchestrator | 2026-03-27 00:43:56.301551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:56.301560 | orchestrator | Friday 27 March 2026 00:43:51 +0000 (0:00:00.220) 0:00:51.569 ********** 2026-03-27 00:43:56.301569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-27 00:43:56.301577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-27 
00:43:56.301586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-27 00:43:56.301598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-27 00:43:56.301608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-27 00:43:56.301617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-27 00:43:56.301626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-27 00:43:56.301634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-27 00:43:56.301643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-27 00:43:56.301654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-27 00:43:56.301667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-27 00:43:56.301679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-27 00:43:56.301693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-27 00:43:56.301707 | orchestrator | 2026-03-27 00:43:56.301721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:56.301735 | orchestrator | Friday 27 March 2026 00:43:51 +0000 (0:00:00.480) 0:00:52.050 ********** 2026-03-27 00:43:56.301745 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:43:56.301754 | orchestrator | 2026-03-27 00:43:56.301762 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:56.301771 | orchestrator | Friday 27 March 2026 00:43:51 +0000 (0:00:00.214) 0:00:52.264 
********** 2026-03-27 00:43:56.301780 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:43:56.301789 | orchestrator | 2026-03-27 00:43:56.301798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:56.301824 | orchestrator | Friday 27 March 2026 00:43:52 +0000 (0:00:00.205) 0:00:52.470 ********** 2026-03-27 00:43:56.301834 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:43:56.301841 | orchestrator | 2026-03-27 00:43:56.301849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:56.301857 | orchestrator | Friday 27 March 2026 00:43:52 +0000 (0:00:00.222) 0:00:52.692 ********** 2026-03-27 00:43:56.301865 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:43:56.301872 | orchestrator | 2026-03-27 00:43:56.301880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:56.301888 | orchestrator | Friday 27 March 2026 00:43:52 +0000 (0:00:00.211) 0:00:52.904 ********** 2026-03-27 00:43:56.301896 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:43:56.301903 | orchestrator | 2026-03-27 00:43:56.301911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:56.301919 | orchestrator | Friday 27 March 2026 00:43:52 +0000 (0:00:00.197) 0:00:53.101 ********** 2026-03-27 00:43:56.301926 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:43:56.301934 | orchestrator | 2026-03-27 00:43:56.301942 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-27 00:43:56.301950 | orchestrator | Friday 27 March 2026 00:43:53 +0000 (0:00:00.755) 0:00:53.857 ********** 2026-03-27 00:43:56.301958 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:43:56.301974 | orchestrator | 2026-03-27 00:43:56.301982 | orchestrator | TASK [Add known links to the list of 
available block devices] ******************
2026-03-27 00:43:56.301990 | orchestrator | Friday 27 March 2026  00:43:53 +0000 (0:00:00.208)       0:00:54.065 **********
2026-03-27 00:43:56.301998 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:43:56.302005 | orchestrator |
2026-03-27 00:43:56.302013 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:56.302089 | orchestrator | Friday 27 March 2026  00:43:53 +0000 (0:00:00.228)       0:00:54.294 **********
2026-03-27 00:43:56.302098 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b)
2026-03-27 00:43:56.302107 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b)
2026-03-27 00:43:56.302115 | orchestrator |
2026-03-27 00:43:56.302123 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:56.302130 | orchestrator | Friday 27 March 2026  00:43:54 +0000 (0:00:00.438)       0:00:54.732 **********
2026-03-27 00:43:56.302138 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3878b4cc-7fe4-4758-b0af-fcf7391d431c)
2026-03-27 00:43:56.302146 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3878b4cc-7fe4-4758-b0af-fcf7391d431c)
2026-03-27 00:43:56.302154 | orchestrator |
2026-03-27 00:43:56.302161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:56.302169 | orchestrator | Friday 27 March 2026  00:43:54 +0000 (0:00:00.465)       0:00:55.198 **********
2026-03-27 00:43:56.302177 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53da1fd0-572d-430c-b2ac-506bde32f617)
2026-03-27 00:43:56.302184 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53da1fd0-572d-430c-b2ac-506bde32f617)
2026-03-27 00:43:56.302192 | orchestrator |
2026-03-27 00:43:56.302200 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:56.302207 | orchestrator | Friday 27 March 2026  00:43:55 +0000 (0:00:00.472)       0:00:55.670 **********
2026-03-27 00:43:56.302215 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3917e6ab-68a3-44be-970a-31d9d2a57984)
2026-03-27 00:43:56.302223 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3917e6ab-68a3-44be-970a-31d9d2a57984)
2026-03-27 00:43:56.302231 | orchestrator |
2026-03-27 00:43:56.302238 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-27 00:43:56.302246 | orchestrator | Friday 27 March 2026  00:43:55 +0000 (0:00:00.430)       0:00:56.101 **********
2026-03-27 00:43:56.302254 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-27 00:43:56.302262 | orchestrator |
2026-03-27 00:43:56.302269 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:43:56.302277 | orchestrator | Friday 27 March 2026  00:43:55 +0000 (0:00:00.314)       0:00:56.415 **********
2026-03-27 00:43:56.302285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-27 00:43:56.302292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-27 00:43:56.302301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-27 00:43:56.302314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-27 00:43:56.302326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-27 00:43:56.302339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-27 00:43:56.302393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-27 00:43:56.302408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-27 00:43:56.302421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-27 00:43:56.302444 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-27 00:43:56.302458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-27 00:43:56.302481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-27 00:44:04.550934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-27 00:44:04.551125 | orchestrator |
2026-03-27 00:44:04.551154 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.551175 | orchestrator | Friday 27 March 2026  00:43:56 +0000 (0:00:00.381)       0:00:56.797 **********
2026-03-27 00:44:04.551195 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.551215 | orchestrator |
2026-03-27 00:44:04.551234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.551254 | orchestrator | Friday 27 March 2026  00:43:56 +0000 (0:00:00.207)       0:00:57.004 **********
2026-03-27 00:44:04.551273 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.551291 | orchestrator |
2026-03-27 00:44:04.551310 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.551329 | orchestrator | Friday 27 March 2026  00:43:56 +0000 (0:00:00.174)       0:00:57.178 **********
2026-03-27 00:44:04.551347 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.551364 | orchestrator |
2026-03-27 00:44:04.551382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.551420 | orchestrator | Friday 27 March 2026  00:43:57 +0000 (0:00:00.498)       0:00:57.677 **********
2026-03-27 00:44:04.551439 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.551459 | orchestrator |
2026-03-27 00:44:04.551478 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.551499 | orchestrator | Friday 27 March 2026  00:43:57 +0000 (0:00:00.169)       0:00:57.847 **********
2026-03-27 00:44:04.551519 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.551539 | orchestrator |
2026-03-27 00:44:04.551560 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.551580 | orchestrator | Friday 27 March 2026  00:43:57 +0000 (0:00:00.184)       0:00:58.031 **********
2026-03-27 00:44:04.551598 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.551617 | orchestrator |
2026-03-27 00:44:04.551636 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.551654 | orchestrator | Friday 27 March 2026  00:43:57 +0000 (0:00:00.174)       0:00:58.206 **********
2026-03-27 00:44:04.551673 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.551691 | orchestrator |
2026-03-27 00:44:04.551709 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.551727 | orchestrator | Friday 27 March 2026  00:43:57 +0000 (0:00:00.199)       0:00:58.406 **********
2026-03-27 00:44:04.551745 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.551763 | orchestrator |
2026-03-27 00:44:04.551780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.551799 | orchestrator | Friday 27 March 2026  00:43:58 +0000 (0:00:00.199)       0:00:58.606 **********
2026-03-27 00:44:04.551818 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-27 00:44:04.551839 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-27 00:44:04.551857 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-27 00:44:04.551875 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-27 00:44:04.551893 | orchestrator |
2026-03-27 00:44:04.551911 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.551929 | orchestrator | Friday 27 March 2026  00:43:58 +0000 (0:00:00.593)       0:00:59.199 **********
2026-03-27 00:44:04.551947 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.551959 | orchestrator |
2026-03-27 00:44:04.551970 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.552002 | orchestrator | Friday 27 March 2026  00:43:58 +0000 (0:00:00.175)       0:00:59.375 **********
2026-03-27 00:44:04.552013 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552024 | orchestrator |
2026-03-27 00:44:04.552034 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.552045 | orchestrator | Friday 27 March 2026  00:43:59 +0000 (0:00:00.199)       0:00:59.575 **********
2026-03-27 00:44:04.552055 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552155 | orchestrator |
2026-03-27 00:44:04.552172 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-27 00:44:04.552183 | orchestrator | Friday 27 March 2026  00:43:59 +0000 (0:00:00.195)       0:00:59.770 **********
2026-03-27 00:44:04.552194 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552204 | orchestrator |
2026-03-27 00:44:04.552215 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-27 00:44:04.552226 | orchestrator | Friday 27 March 2026  00:43:59 +0000 (0:00:00.200)       0:00:59.970 **********
2026-03-27 00:44:04.552236 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552247 | orchestrator |
2026-03-27 00:44:04.552263 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-27 00:44:04.552281 | orchestrator | Friday 27 March 2026  00:43:59 +0000 (0:00:00.355)       0:01:00.326 **********
2026-03-27 00:44:04.552300 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bb6fbf97-7198-5485-83ee-7be3b389ad62'}})
2026-03-27 00:44:04.552317 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'}})
2026-03-27 00:44:04.552334 | orchestrator |
2026-03-27 00:44:04.552351 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-27 00:44:04.552368 | orchestrator | Friday 27 March 2026  00:44:00 +0000 (0:00:00.183)       0:01:00.510 **********
2026-03-27 00:44:04.552388 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:04.552408 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:04.552425 | orchestrator |
2026-03-27 00:44:04.552445 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-27 00:44:04.552479 | orchestrator | Friday 27 March 2026  00:44:01 +0000 (0:00:01.871)       0:01:02.381 **********
2026-03-27 00:44:04.552490 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:04.552502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:04.552513 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552524 | orchestrator |
2026-03-27 00:44:04.552534 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-27 00:44:04.552545 | orchestrator | Friday 27 March 2026  00:44:02 +0000 (0:00:00.131)       0:01:02.513 **********
2026-03-27 00:44:04.552556 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:04.552575 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:04.552587 | orchestrator |
2026-03-27 00:44:04.552596 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-27 00:44:04.552606 | orchestrator | Friday 27 March 2026  00:44:03 +0000 (0:00:01.327)       0:01:03.840 **********
2026-03-27 00:44:04.552615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:04.552635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:04.552644 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552654 | orchestrator |
2026-03-27 00:44:04.552663 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-27 00:44:04.552673 | orchestrator | Friday 27 March 2026  00:44:03 +0000 (0:00:00.135)       0:01:03.976 **********
2026-03-27 00:44:04.552682 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552692 | orchestrator |
2026-03-27 00:44:04.552701 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-27 00:44:04.552710 | orchestrator | Friday 27 March 2026  00:44:03 +0000 (0:00:00.126)       0:01:04.102 **********
2026-03-27 00:44:04.552720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:04.552729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:04.552739 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552748 | orchestrator |
2026-03-27 00:44:04.552757 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-27 00:44:04.552766 | orchestrator | Friday 27 March 2026  00:44:03 +0000 (0:00:00.154)       0:01:04.257 **********
2026-03-27 00:44:04.552776 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552785 | orchestrator |
2026-03-27 00:44:04.552795 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-27 00:44:04.552804 | orchestrator | Friday 27 March 2026  00:44:03 +0000 (0:00:00.125)       0:01:04.383 **********
2026-03-27 00:44:04.552813 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:04.552823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:04.552832 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552842 | orchestrator |
2026-03-27 00:44:04.552851 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-27 00:44:04.552860 | orchestrator | Friday 27 March 2026  00:44:04 +0000 (0:00:00.154)       0:01:04.537 **********
2026-03-27 00:44:04.552869 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552879 | orchestrator |
2026-03-27 00:44:04.552888 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-27 00:44:04.552897 | orchestrator | Friday 27 March 2026  00:44:04 +0000 (0:00:00.132)       0:01:04.670 **********
2026-03-27 00:44:04.552907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:04.552916 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:04.552926 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:04.552935 | orchestrator |
2026-03-27 00:44:04.552944 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-27 00:44:04.552954 | orchestrator | Friday 27 March 2026  00:44:04 +0000 (0:00:00.134)       0:01:04.804 **********
2026-03-27 00:44:04.552963 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:44:04.552973 | orchestrator |
2026-03-27 00:44:04.552982 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-27 00:44:04.552991 | orchestrator | Friday 27 March 2026  00:44:04 +0000 (0:00:00.117)       0:01:04.922 **********
2026-03-27 00:44:04.553007 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:10.608807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:10.608909 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.608926 | orchestrator |
2026-03-27 00:44:10.608937 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-27 00:44:10.608950 | orchestrator | Friday 27 March 2026  00:44:04 +0000 (0:00:00.320)       0:01:05.242 **********
2026-03-27 00:44:10.608961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:10.608973 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:10.609084 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.609098 | orchestrator |
2026-03-27 00:44:10.609128 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-27 00:44:10.609139 | orchestrator | Friday 27 March 2026  00:44:04 +0000 (0:00:00.153)       0:01:05.396 **********
2026-03-27 00:44:10.609151 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:10.609163 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:10.609175 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.609187 | orchestrator |
2026-03-27 00:44:10.609197 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-27 00:44:10.609209 | orchestrator | Friday 27 March 2026  00:44:05 +0000 (0:00:00.139)       0:01:05.535 **********
2026-03-27 00:44:10.609220 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.609231 | orchestrator |
2026-03-27 00:44:10.609244 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-27 00:44:10.609256 | orchestrator | Friday 27 March 2026  00:44:05 +0000 (0:00:00.131)       0:01:05.667 **********
2026-03-27 00:44:10.609268 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.609280 | orchestrator |
2026-03-27 00:44:10.609292 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-27 00:44:10.609304 | orchestrator | Friday 27 March 2026  00:44:05 +0000 (0:00:00.136)       0:01:05.803 **********
2026-03-27 00:44:10.609316 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.609328 | orchestrator |
2026-03-27 00:44:10.609341 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-27 00:44:10.609352 | orchestrator | Friday 27 March 2026  00:44:05 +0000 (0:00:00.122)       0:01:05.926 **********
2026-03-27 00:44:10.609365 | orchestrator | ok: [testbed-node-5] => {
2026-03-27 00:44:10.609379 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-27 00:44:10.609392 | orchestrator | }
2026-03-27 00:44:10.609404 | orchestrator |
2026-03-27 00:44:10.609417 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-27 00:44:10.609428 | orchestrator | Friday 27 March 2026  00:44:05 +0000 (0:00:00.125)       0:01:06.051 **********
2026-03-27 00:44:10.609440 | orchestrator | ok: [testbed-node-5] => {
2026-03-27 00:44:10.609453 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-27 00:44:10.609465 | orchestrator | }
2026-03-27 00:44:10.609475 | orchestrator |
2026-03-27 00:44:10.609483 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-27 00:44:10.609490 | orchestrator | Friday 27 March 2026  00:44:05 +0000 (0:00:00.157)       0:01:06.208 **********
2026-03-27 00:44:10.609497 | orchestrator | ok: [testbed-node-5] => {
2026-03-27 00:44:10.609504 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-27 00:44:10.609511 | orchestrator | }
2026-03-27 00:44:10.609518 | orchestrator |
2026-03-27 00:44:10.609526 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-27 00:44:10.609536 | orchestrator | Friday 27 March 2026  00:44:05 +0000 (0:00:00.133)       0:01:06.341 **********
2026-03-27 00:44:10.609575 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:44:10.609587 | orchestrator |
2026-03-27 00:44:10.609598 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-27 00:44:10.609609 | orchestrator | Friday 27 March 2026  00:44:06 +0000 (0:00:00.527)       0:01:06.869 **********
2026-03-27 00:44:10.609619 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:44:10.609629 | orchestrator |
2026-03-27 00:44:10.609639 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-27 00:44:10.609650 | orchestrator | Friday 27 March 2026  00:44:06 +0000 (0:00:00.522)       0:01:07.391 **********
2026-03-27 00:44:10.609660 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:44:10.609671 | orchestrator |
2026-03-27 00:44:10.609681 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-27 00:44:10.609692 | orchestrator | Friday 27 March 2026  00:44:07 +0000 (0:00:00.550)       0:01:07.942 **********
2026-03-27 00:44:10.609701 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:44:10.609711 | orchestrator |
2026-03-27 00:44:10.609722 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-27 00:44:10.609732 | orchestrator | Friday 27 March 2026  00:44:07 +0000 (0:00:00.343)       0:01:08.285 **********
2026-03-27 00:44:10.609742 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.609753 | orchestrator |
2026-03-27 00:44:10.609763 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-27 00:44:10.609773 | orchestrator | Friday 27 March 2026  00:44:07 +0000 (0:00:00.106)       0:01:08.392 **********
2026-03-27 00:44:10.609784 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.609795 | orchestrator |
2026-03-27 00:44:10.609806 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-27 00:44:10.609817 | orchestrator | Friday 27 March 2026  00:44:08 +0000 (0:00:00.099)       0:01:08.491 **********
2026-03-27 00:44:10.609828 | orchestrator | ok: [testbed-node-5] => {
2026-03-27 00:44:10.609839 | orchestrator |     "vgs_report": {
2026-03-27 00:44:10.609849 | orchestrator |         "vg": []
2026-03-27 00:44:10.609882 | orchestrator |     }
2026-03-27 00:44:10.609893 | orchestrator | }
2026-03-27 00:44:10.609904 | orchestrator |
2026-03-27 00:44:10.609914 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-27 00:44:10.609924 | orchestrator | Friday 27 March 2026  00:44:08 +0000 (0:00:00.137)       0:01:08.629 **********
2026-03-27 00:44:10.609930 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.609936 | orchestrator |
2026-03-27 00:44:10.609942 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-27 00:44:10.609948 | orchestrator | Friday 27 March 2026  00:44:08 +0000 (0:00:00.150)       0:01:08.779 **********
2026-03-27 00:44:10.609958 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.609968 | orchestrator |
2026-03-27 00:44:10.609978 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-27 00:44:10.609988 | orchestrator | Friday 27 March 2026  00:44:08 +0000 (0:00:00.114)       0:01:08.893 **********
2026-03-27 00:44:10.609999 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610008 | orchestrator |
2026-03-27 00:44:10.610128 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-27 00:44:10.610138 | orchestrator | Friday 27 March 2026  00:44:08 +0000 (0:00:00.131)       0:01:09.025 **********
2026-03-27 00:44:10.610144 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610150 | orchestrator |
2026-03-27 00:44:10.610156 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-27 00:44:10.610163 | orchestrator | Friday 27 March 2026  00:44:08 +0000 (0:00:00.136)       0:01:09.162 **********
2026-03-27 00:44:10.610169 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610175 | orchestrator |
2026-03-27 00:44:10.610181 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-27 00:44:10.610187 | orchestrator | Friday 27 March 2026  00:44:08 +0000 (0:00:00.144)       0:01:09.306 **********
2026-03-27 00:44:10.610193 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610211 | orchestrator |
2026-03-27 00:44:10.610217 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-27 00:44:10.610223 | orchestrator | Friday 27 March 2026  00:44:09 +0000 (0:00:00.135)       0:01:09.441 **********
2026-03-27 00:44:10.610229 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610235 | orchestrator |
2026-03-27 00:44:10.610242 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-27 00:44:10.610248 | orchestrator | Friday 27 March 2026  00:44:09 +0000 (0:00:00.133)       0:01:09.575 **********
2026-03-27 00:44:10.610254 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610260 | orchestrator |
2026-03-27 00:44:10.610266 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-27 00:44:10.610273 | orchestrator | Friday 27 March 2026  00:44:09 +0000 (0:00:00.128)       0:01:09.703 **********
2026-03-27 00:44:10.610279 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610285 | orchestrator |
2026-03-27 00:44:10.610291 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-27 00:44:10.610297 | orchestrator | Friday 27 March 2026  00:44:09 +0000 (0:00:00.340)       0:01:10.044 **********
2026-03-27 00:44:10.610303 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610309 | orchestrator |
2026-03-27 00:44:10.610315 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-27 00:44:10.610322 | orchestrator | Friday 27 March 2026  00:44:09 +0000 (0:00:00.123)       0:01:10.168 **********
2026-03-27 00:44:10.610328 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610334 | orchestrator |
2026-03-27 00:44:10.610340 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-27 00:44:10.610354 | orchestrator | Friday 27 March 2026  00:44:09 +0000 (0:00:00.135)       0:01:10.304 **********
2026-03-27 00:44:10.610360 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610367 | orchestrator |
2026-03-27 00:44:10.610373 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-27 00:44:10.610379 | orchestrator | Friday 27 March 2026  00:44:10 +0000 (0:00:00.126)       0:01:10.430 **********
2026-03-27 00:44:10.610385 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610391 | orchestrator |
2026-03-27 00:44:10.610397 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-27 00:44:10.610403 | orchestrator | Friday 27 March 2026  00:44:10 +0000 (0:00:00.124)       0:01:10.554 **********
2026-03-27 00:44:10.610409 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610416 | orchestrator |
2026-03-27 00:44:10.610422 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-27 00:44:10.610428 | orchestrator | Friday 27 March 2026  00:44:10 +0000 (0:00:00.124)       0:01:10.679 **********
2026-03-27 00:44:10.610435 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:10.610442 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:10.610448 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610454 | orchestrator |
2026-03-27 00:44:10.610460 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-27 00:44:10.610466 | orchestrator | Friday 27 March 2026  00:44:10 +0000 (0:00:00.158)       0:01:10.837 **********
2026-03-27 00:44:10.610482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:10.610489 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:10.610495 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:10.610501 | orchestrator |
2026-03-27 00:44:10.610507 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-27 00:44:10.610519 | orchestrator | Friday 27 March 2026  00:44:10 +0000 (0:00:00.128)       0:01:10.966 **********
2026-03-27 00:44:10.610536 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:13.498600 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:13.498725 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:13.498743 | orchestrator |
2026-03-27 00:44:13.498756 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-27 00:44:13.498769 | orchestrator | Friday 27 March 2026  00:44:10 +0000 (0:00:00.148)       0:01:11.115 **********
2026-03-27 00:44:13.498780 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:13.498823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:13.498879 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:13.498892 | orchestrator |
2026-03-27 00:44:13.498903 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-27 00:44:13.498914 | orchestrator | Friday 27 March 2026  00:44:10 +0000 (0:00:00.132)       0:01:11.247 **********
2026-03-27 00:44:13.498925 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:13.498936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:13.498947 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:13.498958 | orchestrator |
2026-03-27 00:44:13.498969 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-27 00:44:13.498980 | orchestrator | Friday 27 March 2026  00:44:10 +0000 (0:00:00.148)       0:01:11.396 **********
2026-03-27 00:44:13.498990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:13.499001 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:13.499012 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:13.499022 | orchestrator |
2026-03-27 00:44:13.499033 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-27 00:44:13.499051 | orchestrator | Friday 27 March 2026  00:44:11 +0000 (0:00:00.138)       0:01:11.535 **********
2026-03-27 00:44:13.499098 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:13.499116 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:13.499135 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:13.499153 | orchestrator |
2026-03-27 00:44:13.499171 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-27 00:44:13.499189 | orchestrator | Friday 27 March 2026  00:44:11 +0000 (0:00:00.291)       0:01:11.826 **********
2026-03-27 00:44:13.499208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:13.499226 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:13.499245 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:13.499289 | orchestrator |
2026-03-27 00:44:13.499309 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-27 00:44:13.499328 | orchestrator | Friday 27 March 2026  00:44:11 +0000 (0:00:00.137)       0:01:11.964 **********
2026-03-27 00:44:13.499346 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:44:13.499365 | orchestrator |
2026-03-27 00:44:13.499384 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-27 00:44:13.499402 | orchestrator | Friday 27 March 2026  00:44:12 +0000 (0:00:00.502)       0:01:12.467 **********
2026-03-27 00:44:13.499420 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:44:13.499440 | orchestrator |
2026-03-27 00:44:13.499457 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-27 00:44:13.499476 | orchestrator | Friday 27 March 2026  00:44:12 +0000 (0:00:00.526)       0:01:12.993 **********
2026-03-27 00:44:13.499487 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:44:13.499498 | orchestrator |
2026-03-27 00:44:13.499508 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-27 00:44:13.499520 | orchestrator | Friday 27 March 2026  00:44:12 +0000 (0:00:00.154)       0:01:13.148 **********
2026-03-27 00:44:13.499531 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'vg_name': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:13.499543 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'vg_name': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:13.499554 | orchestrator |
2026-03-27 00:44:13.499564 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-27 00:44:13.499575 | orchestrator | Friday 27 March 2026  00:44:12 +0000 (0:00:00.181)       0:01:13.329 **********
2026-03-27 00:44:13.499607 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:13.499618 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:13.499629 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:13.499640 | orchestrator |
2026-03-27 00:44:13.499651 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-27 00:44:13.499661 | orchestrator | Friday 27 March 2026  00:44:13 +0000 (0:00:00.143)       0:01:13.473 **********
2026-03-27 00:44:13.499680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:13.499691 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:13.499702 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:13.499713 | orchestrator |
2026-03-27 00:44:13.499723 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-27 00:44:13.499734 | orchestrator | Friday 27 March 2026  00:44:13 +0000 (0:00:00.140)       0:01:13.614 **********
2026-03-27 00:44:13.499744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'})
2026-03-27 00:44:13.499755 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'})
2026-03-27 00:44:13.499766 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:44:13.499776 | orchestrator |
2026-03-27 00:44:13.499787 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-27
00:44:13.499797 | orchestrator | Friday 27 March 2026 00:44:13 +0000 (0:00:00.148) 0:01:13.762 ********** 2026-03-27 00:44:13.499808 | orchestrator | ok: [testbed-node-5] => { 2026-03-27 00:44:13.499818 | orchestrator |  "lvm_report": { 2026-03-27 00:44:13.499829 | orchestrator |  "lv": [ 2026-03-27 00:44:13.499849 | orchestrator |  { 2026-03-27 00:44:13.499860 | orchestrator |  "lv_name": "osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62", 2026-03-27 00:44:13.499871 | orchestrator |  "vg_name": "ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62" 2026-03-27 00:44:13.499881 | orchestrator |  }, 2026-03-27 00:44:13.499892 | orchestrator |  { 2026-03-27 00:44:13.499903 | orchestrator |  "lv_name": "osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331", 2026-03-27 00:44:13.499913 | orchestrator |  "vg_name": "ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331" 2026-03-27 00:44:13.499924 | orchestrator |  } 2026-03-27 00:44:13.499934 | orchestrator |  ], 2026-03-27 00:44:13.499945 | orchestrator |  "pv": [ 2026-03-27 00:44:13.499955 | orchestrator |  { 2026-03-27 00:44:13.499966 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-27 00:44:13.499977 | orchestrator |  "vg_name": "ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62" 2026-03-27 00:44:13.499990 | orchestrator |  }, 2026-03-27 00:44:13.500009 | orchestrator |  { 2026-03-27 00:44:13.500027 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-27 00:44:13.500046 | orchestrator |  "vg_name": "ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331" 2026-03-27 00:44:13.500129 | orchestrator |  } 2026-03-27 00:44:13.500154 | orchestrator |  ] 2026-03-27 00:44:13.500169 | orchestrator |  } 2026-03-27 00:44:13.500180 | orchestrator | } 2026-03-27 00:44:13.500191 | orchestrator | 2026-03-27 00:44:13.500202 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:44:13.500213 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-27 00:44:13.500224 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-27 00:44:13.500235 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-27 00:44:13.500246 | orchestrator | 2026-03-27 00:44:13.500257 | orchestrator | 2026-03-27 00:44:13.500267 | orchestrator | 2026-03-27 00:44:13.500278 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:44:13.500289 | orchestrator | Friday 27 March 2026 00:44:13 +0000 (0:00:00.141) 0:01:13.903 ********** 2026-03-27 00:44:13.500299 | orchestrator | =============================================================================== 2026-03-27 00:44:13.500310 | orchestrator | Create block VGs -------------------------------------------------------- 5.67s 2026-03-27 00:44:13.500320 | orchestrator | Create block LVs -------------------------------------------------------- 4.07s 2026-03-27 00:44:13.500331 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.97s 2026-03-27 00:44:13.500342 | orchestrator | Add known partitions to the list of available block devices ------------- 1.65s 2026-03-27 00:44:13.500352 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.60s 2026-03-27 00:44:13.500363 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s 2026-03-27 00:44:13.500373 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2026-03-27 00:44:13.500384 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.56s 2026-03-27 00:44:13.500405 | orchestrator | Add known links to the list of available block devices ------------------ 1.31s 2026-03-27 00:44:13.786157 | orchestrator | Add known partitions to the list of available block devices ------------- 1.22s 2026-03-27 
00:44:13.786268 | orchestrator | Print LVM report data --------------------------------------------------- 1.07s 2026-03-27 00:44:13.786287 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2026-03-27 00:44:13.786301 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-03-27 00:44:13.786315 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s 2026-03-27 00:44:13.786386 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.79s 2026-03-27 00:44:13.786404 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2026-03-27 00:44:13.786432 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-03-27 00:44:13.786446 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s 2026-03-27 00:44:13.786459 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.73s 2026-03-27 00:44:13.786472 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.72s 2026-03-27 00:44:25.200550 | orchestrator | 2026-03-27 00:44:25 | INFO  | Prepare task for execution of facts. 2026-03-27 00:44:25.269992 | orchestrator | 2026-03-27 00:44:25 | INFO  | Task 59dc3b53-51ca-4e6f-a478-0b7373521a3c (facts) was prepared for execution. 2026-03-27 00:44:25.270174 | orchestrator | 2026-03-27 00:44:25 | INFO  | It takes a moment until task 59dc3b53-51ca-4e6f-a478-0b7373521a3c (facts) has been started and output is visible here. 
2026-03-27 00:44:36.573417 | orchestrator | 2026-03-27 00:44:36.573499 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-27 00:44:36.573510 | orchestrator | 2026-03-27 00:44:36.573517 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-27 00:44:36.573524 | orchestrator | Friday 27 March 2026 00:44:28 +0000 (0:00:00.320) 0:00:00.320 ********** 2026-03-27 00:44:36.573530 | orchestrator | ok: [testbed-manager] 2026-03-27 00:44:36.573536 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:44:36.573542 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:44:36.573548 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:44:36.573554 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:44:36.573559 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:44:36.573565 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:44:36.573570 | orchestrator | 2026-03-27 00:44:36.573575 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-27 00:44:36.573581 | orchestrator | Friday 27 March 2026 00:44:29 +0000 (0:00:01.294) 0:00:01.614 ********** 2026-03-27 00:44:36.573586 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:44:36.573593 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:44:36.573598 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:44:36.573603 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:44:36.573609 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:44:36.573614 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:44:36.573620 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:44:36.573625 | orchestrator | 2026-03-27 00:44:36.573630 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-27 00:44:36.573636 | orchestrator | 2026-03-27 00:44:36.573641 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-27 00:44:36.573647 | orchestrator | Friday 27 March 2026 00:44:30 +0000 (0:00:01.103) 0:00:02.718 ********** 2026-03-27 00:44:36.573652 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:44:36.573657 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:44:36.573663 | orchestrator | ok: [testbed-manager] 2026-03-27 00:44:36.573669 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:44:36.573674 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:44:36.573679 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:44:36.573685 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:44:36.573690 | orchestrator | 2026-03-27 00:44:36.573696 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-27 00:44:36.573701 | orchestrator | 2026-03-27 00:44:36.573706 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-27 00:44:36.573712 | orchestrator | Friday 27 March 2026 00:44:35 +0000 (0:00:05.082) 0:00:07.801 ********** 2026-03-27 00:44:36.573717 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:44:36.573723 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:44:36.573748 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:44:36.573753 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:44:36.573759 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:44:36.573764 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:44:36.573769 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:44:36.573775 | orchestrator | 2026-03-27 00:44:36.573780 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:44:36.573785 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:44:36.573791 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-27 00:44:36.573797 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:44:36.573802 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:44:36.573808 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:44:36.573813 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:44:36.573819 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 00:44:36.573824 | orchestrator | 2026-03-27 00:44:36.573829 | orchestrator | 2026-03-27 00:44:36.573834 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:44:36.573840 | orchestrator | Friday 27 March 2026 00:44:36 +0000 (0:00:00.460) 0:00:08.261 ********** 2026-03-27 00:44:36.573845 | orchestrator | =============================================================================== 2026-03-27 00:44:36.573851 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.08s 2026-03-27 00:44:36.573856 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s 2026-03-27 00:44:36.573872 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2026-03-27 00:44:36.573877 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2026-03-27 00:44:47.929598 | orchestrator | 2026-03-27 00:44:47 | INFO  | Prepare task for execution of frr. 2026-03-27 00:44:48.012289 | orchestrator | 2026-03-27 00:44:48 | INFO  | Task 86f769e4-f6c2-4c69-a982-8b0cdf5c928c (frr) was prepared for execution. 
2026-03-27 00:44:48.012390 | orchestrator | 2026-03-27 00:44:48 | INFO  | It takes a moment until task 86f769e4-f6c2-4c69-a982-8b0cdf5c928c (frr) has been started and output is visible here. 2026-03-27 00:45:13.975642 | orchestrator | 2026-03-27 00:45:13.975713 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-27 00:45:13.975720 | orchestrator | 2026-03-27 00:45:13.975724 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-27 00:45:13.975730 | orchestrator | Friday 27 March 2026 00:44:51 +0000 (0:00:00.314) 0:00:00.314 ********** 2026-03-27 00:45:13.975734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-27 00:45:13.975740 | orchestrator | 2026-03-27 00:45:13.975744 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-27 00:45:13.975748 | orchestrator | Friday 27 March 2026 00:44:51 +0000 (0:00:00.221) 0:00:00.536 ********** 2026-03-27 00:45:13.975752 | orchestrator | changed: [testbed-manager] 2026-03-27 00:45:13.975757 | orchestrator | 2026-03-27 00:45:13.975761 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-27 00:45:13.975792 | orchestrator | Friday 27 March 2026 00:44:52 +0000 (0:00:01.447) 0:00:01.984 ********** 2026-03-27 00:45:13.975796 | orchestrator | changed: [testbed-manager] 2026-03-27 00:45:13.975806 | orchestrator | 2026-03-27 00:45:13.975809 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-27 00:45:13.975814 | orchestrator | Friday 27 March 2026 00:45:03 +0000 (0:00:10.383) 0:00:12.367 ********** 2026-03-27 00:45:13.975818 | orchestrator | ok: [testbed-manager] 2026-03-27 00:45:13.975822 | orchestrator | 2026-03-27 00:45:13.975826 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-27 00:45:13.975830 | orchestrator | Friday 27 March 2026 00:45:04 +0000 (0:00:01.104) 0:00:13.471 ********** 2026-03-27 00:45:13.975834 | orchestrator | changed: [testbed-manager] 2026-03-27 00:45:13.975837 | orchestrator | 2026-03-27 00:45:13.975841 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-27 00:45:13.975845 | orchestrator | Friday 27 March 2026 00:45:05 +0000 (0:00:01.045) 0:00:14.517 ********** 2026-03-27 00:45:13.975848 | orchestrator | ok: [testbed-manager] 2026-03-27 00:45:13.975852 | orchestrator | 2026-03-27 00:45:13.975856 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-27 00:45:13.975860 | orchestrator | Friday 27 March 2026 00:45:06 +0000 (0:00:01.349) 0:00:15.867 ********** 2026-03-27 00:45:13.975863 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:45:13.975867 | orchestrator | 2026-03-27 00:45:13.975871 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-27 00:45:13.975874 | orchestrator | Friday 27 March 2026 00:45:06 +0000 (0:00:00.147) 0:00:16.014 ********** 2026-03-27 00:45:13.975878 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:45:13.975882 | orchestrator | 2026-03-27 00:45:13.975885 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-27 00:45:13.975889 | orchestrator | Friday 27 March 2026 00:45:07 +0000 (0:00:00.296) 0:00:16.312 ********** 2026-03-27 00:45:13.975893 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:45:13.975896 | orchestrator | 2026-03-27 00:45:13.975900 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-27 00:45:13.975905 | orchestrator | Friday 27 March 2026 00:45:07 +0000 (0:00:00.139) 0:00:16.451 ********** 2026-03-27 
00:45:13.975908 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:45:13.975912 | orchestrator | 2026-03-27 00:45:13.975916 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-27 00:45:13.975920 | orchestrator | Friday 27 March 2026 00:45:07 +0000 (0:00:00.211) 0:00:16.662 ********** 2026-03-27 00:45:13.975923 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:45:13.975927 | orchestrator | 2026-03-27 00:45:13.975931 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-27 00:45:13.975934 | orchestrator | Friday 27 March 2026 00:45:07 +0000 (0:00:00.174) 0:00:16.837 ********** 2026-03-27 00:45:13.975938 | orchestrator | changed: [testbed-manager] 2026-03-27 00:45:13.975942 | orchestrator | 2026-03-27 00:45:13.975945 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-27 00:45:13.975949 | orchestrator | Friday 27 March 2026 00:45:08 +0000 (0:00:00.995) 0:00:17.832 ********** 2026-03-27 00:45:13.975953 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-27 00:45:13.975956 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-27 00:45:13.975961 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-27 00:45:13.975965 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-27 00:45:13.975968 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-27 00:45:13.975972 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-27 00:45:13.975979 | orchestrator | 2026-03-27 00:45:13.975983 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-27 00:45:13.975986 | orchestrator | Friday 27 March 2026 00:45:11 +0000 (0:00:02.372) 0:00:20.205 ********** 2026-03-27 00:45:13.975990 | orchestrator | ok: [testbed-manager] 2026-03-27 00:45:13.975994 | orchestrator | 2026-03-27 00:45:13.975998 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-27 00:45:13.976002 | orchestrator | Friday 27 March 2026 00:45:12 +0000 (0:00:01.208) 0:00:21.413 ********** 2026-03-27 00:45:13.976005 | orchestrator | changed: [testbed-manager] 2026-03-27 00:45:13.976009 | orchestrator | 2026-03-27 00:45:13.976013 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:45:13.976017 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-27 00:45:13.976021 | orchestrator | 2026-03-27 00:45:13.976025 | orchestrator | 2026-03-27 00:45:13.976039 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:45:13.976044 | orchestrator | Friday 27 March 2026 00:45:13 +0000 (0:00:01.357) 0:00:22.771 ********** 2026-03-27 00:45:13.976047 | orchestrator | =============================================================================== 2026-03-27 00:45:13.976051 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.38s 2026-03-27 00:45:13.976061 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.37s 2026-03-27 00:45:13.976111 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.45s 2026-03-27 00:45:13.976116 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.36s 2026-03-27 00:45:13.976120 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.35s 
2026-03-27 00:45:13.976124 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.21s 2026-03-27 00:45:13.976127 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.10s 2026-03-27 00:45:13.976131 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.05s 2026-03-27 00:45:13.976135 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.00s 2026-03-27 00:45:13.976138 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.30s 2026-03-27 00:45:13.976142 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-03-27 00:45:13.976146 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.21s 2026-03-27 00:45:13.976150 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.17s 2026-03-27 00:45:13.976153 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-03-27 00:45:13.976157 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.14s 2026-03-27 00:45:14.134245 | orchestrator | 2026-03-27 00:45:14.135196 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Mar 27 00:45:14 UTC 2026 2026-03-27 00:45:14.135232 | orchestrator | 2026-03-27 00:45:15.232262 | orchestrator | 2026-03-27 00:45:15 | INFO  | Collection nutshell is prepared for execution 2026-03-27 00:45:15.335295 | orchestrator | 2026-03-27 00:45:15 | INFO  | A [0] - dotfiles 2026-03-27 00:45:25.427953 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [0] - homer 2026-03-27 00:45:25.428048 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [0] - netdata 2026-03-27 00:45:25.428061 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [0] - openstackclient 2026-03-27 00:45:25.428237 | orchestrator | 2026-03-27 
00:45:25 | INFO  | A [0] - phpmyadmin 2026-03-27 00:45:25.428561 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [0] - common 2026-03-27 00:45:25.433486 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [1] -- loadbalancer 2026-03-27 00:45:25.433834 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [2] --- opensearch 2026-03-27 00:45:25.434263 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [2] --- mariadb-ng 2026-03-27 00:45:25.434762 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [3] ---- horizon 2026-03-27 00:45:25.434968 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [3] ---- keystone 2026-03-27 00:45:25.435691 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [4] ----- neutron 2026-03-27 00:45:25.436463 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [5] ------ wait-for-nova 2026-03-27 00:45:25.436551 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [6] ------- octavia 2026-03-27 00:45:25.438152 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [4] ----- barbican 2026-03-27 00:45:25.438437 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [4] ----- designate 2026-03-27 00:45:25.438739 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [4] ----- ironic 2026-03-27 00:45:25.439462 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [4] ----- placement 2026-03-27 00:45:25.439535 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [4] ----- magnum 2026-03-27 00:45:25.441411 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [1] -- openvswitch 2026-03-27 00:45:25.441515 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [2] --- ovn 2026-03-27 00:45:25.442276 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [1] -- memcached 2026-03-27 00:45:25.442307 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [1] -- redis 2026-03-27 00:45:25.442447 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [1] -- rabbitmq-ng 2026-03-27 00:45:25.443450 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [0] - kubernetes 2026-03-27 00:45:25.446121 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [1] -- 
kubeconfig 2026-03-27 00:45:25.446152 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [1] -- copy-kubeconfig 2026-03-27 00:45:25.446776 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [0] - ceph 2026-03-27 00:45:25.449024 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [1] -- ceph-pools 2026-03-27 00:45:25.449059 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [2] --- copy-ceph-keys 2026-03-27 00:45:25.449358 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [3] ---- cephclient 2026-03-27 00:45:25.449609 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-27 00:45:25.449803 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [4] ----- wait-for-keystone 2026-03-27 00:45:25.450109 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-27 00:45:25.450208 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [5] ------ glance 2026-03-27 00:45:25.450696 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [5] ------ cinder 2026-03-27 00:45:25.450808 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [5] ------ nova 2026-03-27 00:45:25.451243 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [4] ----- prometheus 2026-03-27 00:45:25.451466 | orchestrator | 2026-03-27 00:45:25 | INFO  | A [5] ------ grafana 2026-03-27 00:45:25.647851 | orchestrator | 2026-03-27 00:45:25 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-27 00:45:25.647919 | orchestrator | 2026-03-27 00:45:25 | INFO  | Tasks are running in the background 2026-03-27 00:45:27.581005 | orchestrator | 2026-03-27 00:45:27 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-27 00:45:29.805287 | orchestrator | 2026-03-27 00:45:29 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state STARTED 2026-03-27 00:45:29.805378 | orchestrator | 2026-03-27 00:45:29 | INFO  | Task e390a705-608d-4a72-a562-7bd3072afcf3 is in state STARTED 2026-03-27 00:45:29.805420 | orchestrator | 2026-03-27 00:45:29 | INFO 
 | Task a86327c7-4199-4969-9f67-7a2498bc2ee5 is in state STARTED
2026-03-27 00:45:29.819145 | orchestrator | 2026-03-27 00:45:29 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:45:29.819221 | orchestrator | 2026-03-27 00:45:29 | INFO  | Task 9073f89d-0ecf-45f4-bc19-bf574ff7a6ab is in state STARTED
2026-03-27 00:45:29.819231 | orchestrator | 2026-03-27 00:45:29 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED
2026-03-27 00:45:29.819240 | orchestrator | 2026-03-27 00:45:29 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:45:29.819248 | orchestrator | 2026-03-27 00:45:29 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:45:51.899681 | orchestrator |
2026-03-27 00:45:51.899754 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-27 00:45:51.899760 | orchestrator |
2026-03-27 00:45:51.899765 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2026-03-27 00:45:51.899774 | orchestrator | Friday 27 March 2026 00:45:36 +0000 (0:00:00.319) 0:00:00.319 **********
2026-03-27 00:45:51.899793 | orchestrator | changed: [testbed-manager]
2026-03-27 00:45:51.899798 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:45:51.899802 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:45:51.899806 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:45:51.899809 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:45:51.899813 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:45:51.899817 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:45:51.899820 | orchestrator |
2026-03-27 00:45:51.899824 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-27 00:45:51.899828 | orchestrator | Friday 27 March 2026 00:45:40 +0000 (0:00:04.660) 0:00:04.979 **********
2026-03-27 00:45:51.899832 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-27 00:45:51.899837 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-27 00:45:51.899841 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-27 00:45:51.899844 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-27 00:45:51.899848 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-27 00:45:51.899852 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-27 00:45:51.899855 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-27 00:45:51.899859 | orchestrator |
2026-03-27 00:45:51.899863 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-27 00:45:51.899868 | orchestrator | Friday 27 March 2026 00:45:43 +0000 (0:00:02.929) 0:00:07.909 **********
2026-03-27 00:45:51.899874 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-27 00:45:41.833941', 'end': '2026-03-27 00:45:41.837857', 'delta': '0:00:00.003916', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-27 00:45:51.899930 | orchestrator |
2026-03-27 00:45:51.899934 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-27 00:45:51.899937 | orchestrator | Friday 27 March 2026 00:45:45 +0000 (0:00:01.449) 0:00:09.358 **********
2026-03-27 00:45:51.899941 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-27 00:45:51.899945 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-27 00:45:51.899949 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-27 00:45:51.899953 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-27 00:45:51.899960 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-27 00:45:51.899964 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-27 00:45:51.899968 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-27 00:45:51.899972 | orchestrator |
2026-03-27 00:45:51.899975 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
****************** 2026-03-27 00:45:51.899979 | orchestrator | Friday 27 March 2026 00:45:47 +0000 (0:00:02.604) 0:00:11.962 ********** 2026-03-27 00:45:51.899983 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-27 00:45:51.899987 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-27 00:45:51.899991 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-27 00:45:51.899995 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-27 00:45:51.899998 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-27 00:45:51.900002 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-27 00:45:51.900006 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-27 00:45:51.900009 | orchestrator | 2026-03-27 00:45:51.900013 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:45:51.900020 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:45:51.900025 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:45:51.900029 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:45:51.900033 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:45:51.900037 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:45:51.900040 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:45:51.900044 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:45:51.900048 | orchestrator | 2026-03-27 00:45:51.900051 | orchestrator | 2026-03-27 00:45:51.900055 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-03-27 00:45:51.900131 | orchestrator | Friday 27 March 2026 00:45:51 +0000 (0:00:03.284) 0:00:15.247 ********** 2026-03-27 00:45:51.900140 | orchestrator | =============================================================================== 2026-03-27 00:45:51.900363 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.66s 2026-03-27 00:45:51.900379 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.28s 2026-03-27 00:45:51.900385 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.93s 2026-03-27 00:45:51.900392 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.60s 2026-03-27 00:45:51.900399 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.45s 2026-03-27 00:45:51.900406 | orchestrator | 2026-03-27 00:45:51 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state STARTED 2026-03-27 00:45:51.900412 | orchestrator | 2026-03-27 00:45:51 | INFO  | Task e390a705-608d-4a72-a562-7bd3072afcf3 is in state STARTED 2026-03-27 00:45:51.900419 | orchestrator | 2026-03-27 00:45:51 | INFO  | Task a86327c7-4199-4969-9f67-7a2498bc2ee5 is in state SUCCESS 2026-03-27 00:45:51.900426 | orchestrator | 2026-03-27 00:45:51 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:45:51.900432 | orchestrator | 2026-03-27 00:45:51 | INFO  | Task 9073f89d-0ecf-45f4-bc19-bf574ff7a6ab is in state STARTED 2026-03-27 00:45:51.900447 | orchestrator | 2026-03-27 00:45:51 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:45:51.900454 | orchestrator | 2026-03-27 00:45:51 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:45:51.900460 | orchestrator | 2026-03-27 00:45:51 | INFO  | Wait 1 second(s) 
2026-03-27 00:45:55.458732 | orchestrator | 2026-03-27 00:45:54 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state STARTED
2026-03-27 00:45:55.458795 | orchestrator | 2026-03-27 00:45:54 | INFO  | Task e390a705-608d-4a72-a562-7bd3072afcf3 is in state STARTED
2026-03-27 00:45:55.458806 | orchestrator | 2026-03-27 00:45:54 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:45:55.458813 | orchestrator | 2026-03-27 00:45:54 | INFO  | Task 9073f89d-0ecf-45f4-bc19-bf574ff7a6ab is in state STARTED
2026-03-27 00:45:55.458821 | orchestrator | 2026-03-27 00:45:54 | INFO  | Task 581c75a1-cba1-4961-95f3-a5292dbb993f is in state STARTED
2026-03-27 00:45:55.458828 | orchestrator | 2026-03-27 00:45:54 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED
2026-03-27 00:45:55.458845 | orchestrator | 2026-03-27 00:45:54 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:45:55.458853 | orchestrator | 2026-03-27 00:45:54 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:46:16.648159 | orchestrator | 2026-03-27 00:46:16 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state STARTED
2026-03-27 00:46:16.654120 | orchestrator | 2026-03-27 00:46:16 | INFO  | Task e390a705-608d-4a72-a562-7bd3072afcf3 is in state STARTED
2026-03-27 00:46:16.664712 | orchestrator | 2026-03-27 00:46:16 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:46:16.664751 | orchestrator | 2026-03-27 00:46:16 | INFO  | Task 9073f89d-0ecf-45f4-bc19-bf574ff7a6ab is in state SUCCESS
2026-03-27 00:46:16.664759 | orchestrator | 2026-03-27 00:46:16 | INFO  | Task 581c75a1-cba1-4961-95f3-a5292dbb993f is in state STARTED
2026-03-27 00:46:16.664766 | orchestrator | 2026-03-27 00:46:16 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED
2026-03-27 00:46:16.664773 | orchestrator | 2026-03-27 00:46:16 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:46:16.664780 | orchestrator | 2026-03-27 00:46:16 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:46:32.117915 | orchestrator | 2026-03-27 00:46:32 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state STARTED
2026-03-27 00:46:32.118205 | orchestrator | 2026-03-27 00:46:32 | INFO  | Task e390a705-608d-4a72-a562-7bd3072afcf3 is in state SUCCESS
2026-03-27 00:46:32.129271 | orchestrator | 2026-03-27 00:46:32 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:46:32.143909 | orchestrator | 2026-03-27 00:46:32 | INFO  | Task 581c75a1-cba1-4961-95f3-a5292dbb993f is in state STARTED
2026-03-27 00:46:32.149408 | orchestrator | 2026-03-27 00:46:32 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED
2026-03-27 00:46:32.152326 | orchestrator | 2026-03-27 00:46:32 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:46:32.152410 | orchestrator | 2026-03-27 00:46:32 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:46:59.665333 | orchestrator | 2026-03-27 00:46:59 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state STARTED
2026-03-27 00:46:59.670609 | orchestrator | 2026-03-27 00:46:59 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:46:59.670925 | orchestrator | 2026-03-27 00:46:59 | INFO  | Task 
581c75a1-cba1-4961-95f3-a5292dbb993f is in state SUCCESS 2026-03-27 00:46:59.671816 | orchestrator | 2026-03-27 00:46:59.671847 | orchestrator | 2026-03-27 00:46:59.671853 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-03-27 00:46:59.671860 | orchestrator | 2026-03-27 00:46:59.671865 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-03-27 00:46:59.671870 | orchestrator | Friday 27 March 2026 00:45:36 +0000 (0:00:01.130) 0:00:01.130 ********** 2026-03-27 00:46:59.671877 | orchestrator | ok: [testbed-manager] => { 2026-03-27 00:46:59.671884 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-03-27 00:46:59.671890 | orchestrator | } 2026-03-27 00:46:59.671895 | orchestrator | 2026-03-27 00:46:59.671901 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-03-27 00:46:59.671906 | orchestrator | Friday 27 March 2026 00:45:37 +0000 (0:00:01.172) 0:00:02.302 ********** 2026-03-27 00:46:59.671912 | orchestrator | ok: [testbed-manager] 2026-03-27 00:46:59.671918 | orchestrator | 2026-03-27 00:46:59.671923 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-03-27 00:46:59.671928 | orchestrator | Friday 27 March 2026 00:45:40 +0000 (0:00:02.058) 0:00:04.361 ********** 2026-03-27 00:46:59.671931 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-03-27 00:46:59.671935 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-03-27 00:46:59.671939 | orchestrator | 2026-03-27 00:46:59.671945 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-03-27 00:46:59.671950 | orchestrator | Friday 27 March 2026 00:45:41 +0000 (0:00:01.634) 0:00:05.995 ********** 2026-03-27 00:46:59.671955 | 
orchestrator | changed: [testbed-manager] 2026-03-27 00:46:59.671960 | orchestrator | 2026-03-27 00:46:59.671965 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-03-27 00:46:59.671971 | orchestrator | Friday 27 March 2026 00:45:44 +0000 (0:00:02.799) 0:00:08.795 ********** 2026-03-27 00:46:59.671976 | orchestrator | changed: [testbed-manager] 2026-03-27 00:46:59.671981 | orchestrator | 2026-03-27 00:46:59.671986 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-03-27 00:46:59.672003 | orchestrator | Friday 27 March 2026 00:45:45 +0000 (0:00:01.394) 0:00:10.190 ********** 2026-03-27 00:46:59.672008 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-03-27 00:46:59.672013 | orchestrator | ok: [testbed-manager] 2026-03-27 00:46:59.672018 | orchestrator | 2026-03-27 00:46:59.672024 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-03-27 00:46:59.672042 | orchestrator | Friday 27 March 2026 00:46:13 +0000 (0:00:27.459) 0:00:37.650 ********** 2026-03-27 00:46:59.672050 | orchestrator | changed: [testbed-manager] 2026-03-27 00:46:59.672055 | orchestrator | 2026-03-27 00:46:59.672060 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:46:59.672066 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:46:59.672072 | orchestrator | 2026-03-27 00:46:59.672077 | orchestrator | 2026-03-27 00:46:59.672082 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:46:59.672088 | orchestrator | Friday 27 March 2026 00:46:16 +0000 (0:00:02.885) 0:00:40.535 ********** 2026-03-27 00:46:59.672093 | orchestrator | =============================================================================== 2026-03-27 
00:46:59.672098 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.46s 2026-03-27 00:46:59.672103 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.89s 2026-03-27 00:46:59.672108 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.80s 2026-03-27 00:46:59.672113 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.06s 2026-03-27 00:46:59.672118 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.63s 2026-03-27 00:46:59.672123 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.39s 2026-03-27 00:46:59.672128 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 1.17s 2026-03-27 00:46:59.672133 | orchestrator | 2026-03-27 00:46:59.672138 | orchestrator | 2026-03-27 00:46:59.672143 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-27 00:46:59.672148 | orchestrator | 2026-03-27 00:46:59.672153 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-27 00:46:59.672158 | orchestrator | Friday 27 March 2026 00:45:38 +0000 (0:00:01.196) 0:00:01.196 ********** 2026-03-27 00:46:59.672164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-27 00:46:59.672170 | orchestrator | 2026-03-27 00:46:59.672175 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-27 00:46:59.672180 | orchestrator | Friday 27 March 2026 00:45:38 +0000 (0:00:00.372) 0:00:01.568 ********** 2026-03-27 00:46:59.672184 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-27 00:46:59.672187 
| orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-27 00:46:59.672190 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-27 00:46:59.672194 | orchestrator | 2026-03-27 00:46:59.672197 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-27 00:46:59.672200 | orchestrator | Friday 27 March 2026 00:45:40 +0000 (0:00:02.243) 0:00:03.812 ********** 2026-03-27 00:46:59.672203 | orchestrator | changed: [testbed-manager] 2026-03-27 00:46:59.672206 | orchestrator | 2026-03-27 00:46:59.672209 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-27 00:46:59.672212 | orchestrator | Friday 27 March 2026 00:45:43 +0000 (0:00:02.484) 0:00:06.296 ********** 2026-03-27 00:46:59.672221 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-03-27 00:46:59.672225 | orchestrator | ok: [testbed-manager] 2026-03-27 00:46:59.672228 | orchestrator | 2026-03-27 00:46:59.672234 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-27 00:46:59.672238 | orchestrator | Friday 27 March 2026 00:46:19 +0000 (0:00:36.332) 0:00:42.629 ********** 2026-03-27 00:46:59.672241 | orchestrator | changed: [testbed-manager] 2026-03-27 00:46:59.672244 | orchestrator | 2026-03-27 00:46:59.672247 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-27 00:46:59.672250 | orchestrator | Friday 27 March 2026 00:46:20 +0000 (0:00:01.487) 0:00:44.116 ********** 2026-03-27 00:46:59.672267 | orchestrator | ok: [testbed-manager] 2026-03-27 00:46:59.672272 | orchestrator | 2026-03-27 00:46:59.672281 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-27 00:46:59.672286 | orchestrator | Friday 27 March 2026 00:46:22 +0000 (0:00:01.901) 
0:00:46.017 ********** 2026-03-27 00:46:59.672291 | orchestrator | changed: [testbed-manager] 2026-03-27 00:46:59.672296 | orchestrator | 2026-03-27 00:46:59.672301 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-27 00:46:59.672307 | orchestrator | Friday 27 March 2026 00:46:26 +0000 (0:00:03.692) 0:00:49.710 ********** 2026-03-27 00:46:59.672312 | orchestrator | changed: [testbed-manager] 2026-03-27 00:46:59.672317 | orchestrator | 2026-03-27 00:46:59.672322 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-27 00:46:59.672327 | orchestrator | Friday 27 March 2026 00:46:27 +0000 (0:00:00.958) 0:00:50.668 ********** 2026-03-27 00:46:59.672330 | orchestrator | changed: [testbed-manager] 2026-03-27 00:46:59.672333 | orchestrator | 2026-03-27 00:46:59.672336 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-27 00:46:59.672339 | orchestrator | Friday 27 March 2026 00:46:28 +0000 (0:00:00.663) 0:00:51.332 ********** 2026-03-27 00:46:59.672342 | orchestrator | ok: [testbed-manager] 2026-03-27 00:46:59.672345 | orchestrator | 2026-03-27 00:46:59.672348 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:46:59.672351 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:46:59.672354 | orchestrator | 2026-03-27 00:46:59.672357 | orchestrator | 2026-03-27 00:46:59.672360 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:46:59.672363 | orchestrator | Friday 27 March 2026 00:46:28 +0000 (0:00:00.498) 0:00:51.830 ********** 2026-03-27 00:46:59.672367 | orchestrator | =============================================================================== 2026-03-27 00:46:59.672370 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 36.33s 2026-03-27 00:46:59.672373 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.69s 2026-03-27 00:46:59.672376 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.48s 2026-03-27 00:46:59.672379 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.24s 2026-03-27 00:46:59.672382 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.90s 2026-03-27 00:46:59.672385 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.49s 2026-03-27 00:46:59.672388 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.96s 2026-03-27 00:46:59.672391 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.66s 2026-03-27 00:46:59.672394 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.50s 2026-03-27 00:46:59.672397 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.37s 2026-03-27 00:46:59.672400 | orchestrator | 2026-03-27 00:46:59.672403 | orchestrator | 2026-03-27 00:46:59.672406 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-03-27 00:46:59.672409 | orchestrator | 2026-03-27 00:46:59.672412 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-03-27 00:46:59.672415 | orchestrator | Friday 27 March 2026 00:45:55 +0000 (0:00:00.312) 0:00:00.312 ********** 2026-03-27 00:46:59.672421 | orchestrator | ok: [testbed-manager] 2026-03-27 00:46:59.672425 | orchestrator | 2026-03-27 00:46:59.672428 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-03-27 00:46:59.672431 | orchestrator | Friday 27 March 2026 
00:45:57 +0000 (0:00:01.882) 0:00:02.195 ********** 2026-03-27 00:46:59.672434 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-03-27 00:46:59.672438 | orchestrator | 2026-03-27 00:46:59.672441 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-03-27 00:46:59.672445 | orchestrator | Friday 27 March 2026 00:45:59 +0000 (0:00:01.446) 0:00:03.642 ********** 2026-03-27 00:46:59.672448 | orchestrator | changed: [testbed-manager] 2026-03-27 00:46:59.672452 | orchestrator | 2026-03-27 00:46:59.672455 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-03-27 00:46:59.672459 | orchestrator | Friday 27 March 2026 00:46:01 +0000 (0:00:02.082) 0:00:05.724 ********** 2026-03-27 00:46:59.672462 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-03-27 00:46:59.672466 | orchestrator | ok: [testbed-manager] 2026-03-27 00:46:59.672469 | orchestrator | 2026-03-27 00:46:59.672472 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-03-27 00:46:59.672476 | orchestrator | Friday 27 March 2026 00:46:55 +0000 (0:00:53.695) 0:00:59.419 ********** 2026-03-27 00:46:59.672479 | orchestrator | changed: [testbed-manager] 2026-03-27 00:46:59.672483 | orchestrator | 2026-03-27 00:46:59.672489 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:46:59.672492 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:46:59.672496 | orchestrator | 2026-03-27 00:46:59.672499 | orchestrator | 2026-03-27 00:46:59.672503 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:46:59.672510 | orchestrator | Friday 27 March 2026 00:46:58 +0000 (0:00:03.248) 0:01:02.668 ********** 2026-03-27 00:46:59.672514 
| orchestrator | =============================================================================== 2026-03-27 00:46:59.672517 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 53.70s 2026-03-27 00:46:59.672521 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.25s 2026-03-27 00:46:59.672524 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.08s 2026-03-27 00:46:59.672527 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.88s 2026-03-27 00:46:59.672531 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.45s 2026-03-27 00:46:59.673162 | orchestrator | 2026-03-27 00:46:59 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:46:59.677838 | orchestrator | 2026-03-27 00:46:59 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:46:59.677892 | orchestrator | 2026-03-27 00:46:59 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:02.720579 | orchestrator | 2026-03-27 00:47:02 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state STARTED 2026-03-27 00:47:02.720633 | orchestrator | 2026-03-27 00:47:02 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:02.721627 | orchestrator | 2026-03-27 00:47:02 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:02.722757 | orchestrator | 2026-03-27 00:47:02 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:02.722790 | orchestrator | 2026-03-27 00:47:02 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:05.771378 | orchestrator | 2026-03-27 00:47:05 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state STARTED 2026-03-27 00:47:05.772873 | orchestrator | 2026-03-27 00:47:05 | INFO  | Task 
a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:05.774850 | orchestrator | 2026-03-27 00:47:05 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:05.776324 | orchestrator | 2026-03-27 00:47:05 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:05.776382 | orchestrator | 2026-03-27 00:47:05 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:08.808196 | orchestrator | 2026-03-27 00:47:08 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state STARTED 2026-03-27 00:47:08.809529 | orchestrator | 2026-03-27 00:47:08 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:08.810931 | orchestrator | 2026-03-27 00:47:08 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:08.814771 | orchestrator | 2026-03-27 00:47:08 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:08.814849 | orchestrator | 2026-03-27 00:47:08 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:11.849095 | orchestrator | 2026-03-27 00:47:11 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state STARTED 2026-03-27 00:47:11.850135 | orchestrator | 2026-03-27 00:47:11 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:11.850842 | orchestrator | 2026-03-27 00:47:11 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:11.851803 | orchestrator | 2026-03-27 00:47:11 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:11.851844 | orchestrator | 2026-03-27 00:47:11 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:14.893480 | orchestrator | 2026-03-27 00:47:14 | INFO  | Task fff27476-fb0b-4f57-8098-668548240b4a is in state SUCCESS 2026-03-27 00:47:14.894663 | orchestrator | 2026-03-27 00:47:14.894717 | orchestrator | 2026-03-27 
00:47:14.894724 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 00:47:14.894731 | orchestrator | 2026-03-27 00:47:14.894737 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 00:47:14.894743 | orchestrator | Friday 27 March 2026 00:45:37 +0000 (0:00:01.220) 0:00:01.220 ********** 2026-03-27 00:47:14.894749 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-27 00:47:14.894755 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-27 00:47:14.894760 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-27 00:47:14.894770 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-27 00:47:14.894776 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-27 00:47:14.894782 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-27 00:47:14.894788 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-27 00:47:14.894793 | orchestrator | 2026-03-27 00:47:14.894799 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-03-27 00:47:14.894803 | orchestrator | 2026-03-27 00:47:14.894808 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-03-27 00:47:14.894813 | orchestrator | Friday 27 March 2026 00:45:38 +0000 (0:00:01.811) 0:00:03.032 ********** 2026-03-27 00:47:14.894825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:47:14.894834 | orchestrator | 2026-03-27 00:47:14.894912 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent 
repository] *** 2026-03-27 00:47:14.894917 | orchestrator | Friday 27 March 2026 00:45:40 +0000 (0:00:02.035) 0:00:05.068 ********** 2026-03-27 00:47:14.894934 | orchestrator | ok: [testbed-manager] 2026-03-27 00:47:14.894940 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:47:14.894945 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:47:14.894951 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:47:14.894956 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:47:14.894961 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:47:14.894966 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:47:14.894971 | orchestrator | 2026-03-27 00:47:14.894976 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-27 00:47:14.894984 | orchestrator | Friday 27 March 2026 00:45:44 +0000 (0:00:04.065) 0:00:09.133 ********** 2026-03-27 00:47:14.894990 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:47:14.894995 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:47:14.895001 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:47:14.895006 | orchestrator | ok: [testbed-manager] 2026-03-27 00:47:14.895012 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:47:14.895017 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:47:14.895054 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:47:14.895060 | orchestrator | 2026-03-27 00:47:14.895065 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-03-27 00:47:14.895070 | orchestrator | Friday 27 March 2026 00:45:48 +0000 (0:00:03.581) 0:00:12.715 ********** 2026-03-27 00:47:14.895075 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:47:14.895083 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:47:14.895089 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:47:14.895093 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:47:14.895098 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:47:14.895103 | 
orchestrator | changed: [testbed-node-5] 2026-03-27 00:47:14.895108 | orchestrator | changed: [testbed-manager] 2026-03-27 00:47:14.895112 | orchestrator | 2026-03-27 00:47:14.895119 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-03-27 00:47:14.895124 | orchestrator | Friday 27 March 2026 00:45:51 +0000 (0:00:02.892) 0:00:15.607 ********** 2026-03-27 00:47:14.895129 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:47:14.895135 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:47:14.895140 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:47:14.895146 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:47:14.895152 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:47:14.895157 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:47:14.895163 | orchestrator | changed: [testbed-manager] 2026-03-27 00:47:14.895174 | orchestrator | 2026-03-27 00:47:14.895180 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-03-27 00:47:14.895188 | orchestrator | Friday 27 March 2026 00:46:02 +0000 (0:00:10.937) 0:00:26.544 ********** 2026-03-27 00:47:14.895194 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:47:14.895200 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:47:14.895205 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:47:14.895211 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:47:14.895216 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:47:14.895222 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:47:14.895227 | orchestrator | changed: [testbed-manager] 2026-03-27 00:47:14.895233 | orchestrator | 2026-03-27 00:47:14.895238 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-03-27 00:47:14.895243 | orchestrator | Friday 27 March 2026 00:46:45 +0000 (0:00:42.703) 0:01:09.248 ********** 2026-03-27 00:47:14.895250 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:47:14.895256 | orchestrator | 2026-03-27 00:47:14.895262 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-03-27 00:47:14.895267 | orchestrator | Friday 27 March 2026 00:46:46 +0000 (0:00:01.370) 0:01:10.619 ********** 2026-03-27 00:47:14.895273 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-03-27 00:47:14.895285 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-27 00:47:14.895290 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-03-27 00:47:14.895296 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-03-27 00:47:14.895313 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-27 00:47:14.895318 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-03-27 00:47:14.895324 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-27 00:47:14.895329 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-27 00:47:14.895335 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-27 00:47:14.895340 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-03-27 00:47:14.895345 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-03-27 00:47:14.895351 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-27 00:47:14.895360 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-03-27 00:47:14.895366 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-27 00:47:14.895371 | orchestrator | 2026-03-27 00:47:14.895376 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 
2026-03-27 00:47:14.895383 | orchestrator | Friday 27 March 2026 00:46:50 +0000 (0:00:04.244) 0:01:14.863 **********
2026-03-27 00:47:14.895389 | orchestrator | ok: [testbed-manager]
2026-03-27 00:47:14.895442 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:47:14.895449 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:47:14.895454 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:47:14.895460 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:47:14.895464 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:47:14.895469 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:47:14.895474 | orchestrator |
2026-03-27 00:47:14.895480 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-27 00:47:14.895485 | orchestrator | Friday 27 March 2026 00:46:52 +0000 (0:00:01.385) 0:01:16.248 **********
2026-03-27 00:47:14.895491 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:47:14.895496 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:47:14.895501 | orchestrator | changed: [testbed-manager]
2026-03-27 00:47:14.895507 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:47:14.895512 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:47:14.895517 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:47:14.895523 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:47:14.895528 | orchestrator |
2026-03-27 00:47:14.895533 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-27 00:47:14.895539 | orchestrator | Friday 27 March 2026 00:46:53 +0000 (0:00:01.201) 0:01:17.450 **********
2026-03-27 00:47:14.895544 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:47:14.895550 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:47:14.895555 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:47:14.895560 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:47:14.895566 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:47:14.895571 | orchestrator | ok: [testbed-manager]
2026-03-27 00:47:14.895577 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:47:14.895582 | orchestrator |
2026-03-27 00:47:14.895587 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-27 00:47:14.895593 | orchestrator | Friday 27 March 2026 00:46:55 +0000 (0:00:01.742) 0:01:19.193 **********
2026-03-27 00:47:14.895598 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:47:14.895603 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:47:14.895609 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:47:14.895614 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:47:14.895619 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:47:14.895625 | orchestrator | ok: [testbed-manager]
2026-03-27 00:47:14.895630 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:47:14.895635 | orchestrator |
2026-03-27 00:47:14.895641 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-27 00:47:14.895646 | orchestrator | Friday 27 March 2026 00:46:57 +0000 (0:00:02.501) 0:01:21.695 **********
2026-03-27 00:47:14.895656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-27 00:47:14.895663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:47:14.895669 | orchestrator |
2026-03-27 00:47:14.895674 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-27 00:47:14.895680 | orchestrator | Friday 27 March 2026 00:46:59 +0000 (0:00:02.218) 0:01:23.914 **********
2026-03-27 00:47:14.895685 | orchestrator | changed: [testbed-manager]
2026-03-27 00:47:14.895691 | orchestrator |
2026-03-27 00:47:14.895696 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-27 00:47:14.895701 | orchestrator | Friday 27 March 2026 00:47:02 +0000 (0:00:02.501) 0:01:26.415 **********
2026-03-27 00:47:14.895707 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:47:14.895712 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:47:14.895718 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:47:14.895723 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:47:14.895728 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:47:14.895734 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:47:14.895739 | orchestrator | changed: [testbed-manager]
2026-03-27 00:47:14.895745 | orchestrator |
2026-03-27 00:47:14.895750 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:47:14.895756 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:47:14.895762 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:47:14.895768 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:47:14.895773 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:47:14.895784 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:47:14.895790 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:47:14.895795 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:47:14.895801 | orchestrator |
2026-03-27 00:47:14.895806 | orchestrator |
2026-03-27 00:47:14.895811 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:47:14.895819 | orchestrator | Friday 27 March 2026 00:47:13 +0000 (0:00:11.023) 0:01:37.439 ********** 2026-03-27 00:47:14.895825 | orchestrator | =============================================================================== 2026-03-27 00:47:14.895830 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 42.70s 2026-03-27 00:47:14.895836 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.02s 2026-03-27 00:47:14.895841 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.94s 2026-03-27 00:47:14.895845 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.24s 2026-03-27 00:47:14.895850 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 4.07s 2026-03-27 00:47:14.895855 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.58s 2026-03-27 00:47:14.895862 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.89s 2026-03-27 00:47:14.895870 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.50s 2026-03-27 00:47:14.895879 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.50s 2026-03-27 00:47:14.895885 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.22s 2026-03-27 00:47:14.895890 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.04s 2026-03-27 00:47:14.895895 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.81s 2026-03-27 00:47:14.895900 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.74s 2026-03-27 00:47:14.895907 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.39s 
2026-03-27 00:47:14.895912 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.37s 2026-03-27 00:47:14.895919 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.20s 2026-03-27 00:47:14.897089 | orchestrator | 2026-03-27 00:47:14 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:14.898600 | orchestrator | 2026-03-27 00:47:14 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:14.901945 | orchestrator | 2026-03-27 00:47:14 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:14.902008 | orchestrator | 2026-03-27 00:47:14 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:17.948880 | orchestrator | 2026-03-27 00:47:17 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:17.950774 | orchestrator | 2026-03-27 00:47:17 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:17.951502 | orchestrator | 2026-03-27 00:47:17 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:17.952384 | orchestrator | 2026-03-27 00:47:17 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:21.003550 | orchestrator | 2026-03-27 00:47:21 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:21.004951 | orchestrator | 2026-03-27 00:47:21 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:21.009710 | orchestrator | 2026-03-27 00:47:21 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:21.009898 | orchestrator | 2026-03-27 00:47:21 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:24.047830 | orchestrator | 2026-03-27 00:47:24 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:24.049076 | orchestrator | 2026-03-27 
00:47:24 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:24.050682 | orchestrator | 2026-03-27 00:47:24 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:24.050736 | orchestrator | 2026-03-27 00:47:24 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:27.103751 | orchestrator | 2026-03-27 00:47:27 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:27.103811 | orchestrator | 2026-03-27 00:47:27 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:27.107879 | orchestrator | 2026-03-27 00:47:27 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:27.107931 | orchestrator | 2026-03-27 00:47:27 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:30.150273 | orchestrator | 2026-03-27 00:47:30 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:30.150925 | orchestrator | 2026-03-27 00:47:30 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:30.151752 | orchestrator | 2026-03-27 00:47:30 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:30.151798 | orchestrator | 2026-03-27 00:47:30 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:33.200535 | orchestrator | 2026-03-27 00:47:33 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:33.202470 | orchestrator | 2026-03-27 00:47:33 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:33.204351 | orchestrator | 2026-03-27 00:47:33 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:33.204403 | orchestrator | 2026-03-27 00:47:33 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:36.250135 | orchestrator | 2026-03-27 00:47:36 | INFO  | Task 
a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:36.250460 | orchestrator | 2026-03-27 00:47:36 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state STARTED 2026-03-27 00:47:36.251994 | orchestrator | 2026-03-27 00:47:36 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:47:36.252086 | orchestrator | 2026-03-27 00:47:36 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:47:39.294495 | orchestrator | 2026-03-27 00:47:39 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED 2026-03-27 00:47:39.307274 | orchestrator | 2026-03-27 00:47:39 | INFO  | Task 52b3d8bb-8745-4181-9894-68de12852887 is in state SUCCESS 2026-03-27 00:47:39.309449 | orchestrator | 2026-03-27 00:47:39.309496 | orchestrator | 2026-03-27 00:47:39.309504 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-27 00:47:39.309510 | orchestrator | 2026-03-27 00:47:39.309515 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-27 00:47:39.309521 | orchestrator | Friday 27 March 2026 00:45:29 +0000 (0:00:00.359) 0:00:00.359 ********** 2026-03-27 00:47:39.309527 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:47:39.309533 | orchestrator | 2026-03-27 00:47:39.309538 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-27 00:47:39.309581 | orchestrator | Friday 27 March 2026 00:45:30 +0000 (0:00:01.259) 0:00:01.618 ********** 2026-03-27 00:47:39.309585 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-27 00:47:39.309588 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-27 00:47:39.309592 | orchestrator | changed: 
[testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-27 00:47:39.309595 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-27 00:47:39.309598 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-27 00:47:39.309601 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-27 00:47:39.309604 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-27 00:47:39.309607 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-27 00:47:39.309611 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-27 00:47:39.309614 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-27 00:47:39.309617 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-27 00:47:39.309620 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-27 00:47:39.309656 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-27 00:47:39.309669 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-27 00:47:39.309672 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-27 00:47:39.309675 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-27 00:47:39.309678 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-27 00:47:39.309682 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-27 00:47:39.309685 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-27 00:47:39.309688 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-27 00:47:39.309691 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-27 00:47:39.309694 | orchestrator | 2026-03-27 00:47:39.309697 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-27 00:47:39.309700 | orchestrator | Friday 27 March 2026 00:45:34 +0000 (0:00:03.887) 0:00:05.506 ********** 2026-03-27 00:47:39.309703 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:47:39.309707 | orchestrator | 2026-03-27 00:47:39.309710 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-27 00:47:39.309718 | orchestrator | Friday 27 March 2026 00:45:36 +0000 (0:00:01.551) 0:00:07.058 ********** 2026-03-27 00:47:39.309723 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.309728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.309743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.309746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.309756 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.309762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.309765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.309769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309795 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309798 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309811 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309829 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309832 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.309836 | orchestrator | 2026-03-27 00:47:39.309839 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-27 00:47:39.309842 | orchestrator | Friday 27 March 2026 00:45:41 +0000 (0:00:04.702) 0:00:11.760 ********** 2026-03-27 00:47:39.309845 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-27 00:47:39.309848 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:47:39.309853 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:47:39.309856 | orchestrator | skipping: [testbed-manager] 2026-03-27 00:47:39.309860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-27 00:47:39.309866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:47:39.309869 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:47:39.309875 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:47:39.309878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-27 00:47:39.309881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:47:39.309884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309887 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:47:39.309892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.309895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309902 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:47:39.309907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.309912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309918 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:47:39.309921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.309925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309932 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:47:39.309935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.309941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309949 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:47:39.309952 | orchestrator |
2026-03-27 00:47:39.309956 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-03-27 00:47:39.309959 | orchestrator | Friday 27 March 2026 00:45:43 +0000 (0:00:02.345) 0:00:14.105 **********
2026-03-27 00:47:39.309962 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.309965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.309968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309978 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309981 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.309992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.309999 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:47:39.310080 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:47:39.310121 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:47:39.310125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.310129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.310133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.310136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.310998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311066 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:47:39.311072 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:47:39.311169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311190 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:47:39.311203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311231 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:47:39.311236 | orchestrator |
2026-03-27 00:47:39.311241 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-03-27 00:47:39.311247 | orchestrator | Friday 27 March 2026 00:45:47 +0000 (0:00:03.897) 0:00:18.003 **********
2026-03-27 00:47:39.311252 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:47:39.311256 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:47:39.311261 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:47:39.311266 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:47:39.311271 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:47:39.311284 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:47:39.311289 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:47:39.311294 | orchestrator |
2026-03-27 00:47:39.311300 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-27 00:47:39.311308 | orchestrator | Friday 27 March 2026 00:45:48 +0000 (0:00:01.070) 0:00:19.073 **********
2026-03-27 00:47:39.311313 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:47:39.311318 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:47:39.311322 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:47:39.311327 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:47:39.311332 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:47:39.311336 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:47:39.311341 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:47:39.311345 | orchestrator |
2026-03-27 00:47:39.311351 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-03-27 00:47:39.311356 | orchestrator | Friday 27 March 2026 00:45:49 +0000 (0:00:00.915) 0:00:19.989 **********
2026-03-27 00:47:39.311361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311407 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311519 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311541 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311546 | orchestrator |
2026-03-27 00:47:39.311551 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-27 00:47:39.311556 | orchestrator | Friday 27 March 2026 00:45:57 +0000 (0:00:08.154) 0:00:28.144 **********
2026-03-27 00:47:39.311561 | orchestrator | [WARNING]: Skipped
2026-03-27 00:47:39.311567 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-27 00:47:39.311573 | orchestrator | to this access issue:
2026-03-27 00:47:39.311578 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-27 00:47:39.311582 | orchestrator | directory
2026-03-27 00:47:39.311588 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 00:47:39.311593 | orchestrator |
2026-03-27 00:47:39.311598 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-27 00:47:39.311603 | orchestrator | Friday 27 March 2026 00:45:58 +0000 (0:00:01.338) 0:00:29.482 **********
2026-03-27 00:47:39.311608 | orchestrator | [WARNING]: Skipped
2026-03-27 00:47:39.311613 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-27 00:47:39.311745 | orchestrator | to this access issue:
2026-03-27 00:47:39.311751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-27 00:47:39.311754 | orchestrator | directory
2026-03-27 00:47:39.311758 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 00:47:39.311762 | orchestrator |
2026-03-27 00:47:39.311765 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-27 00:47:39.311768 | orchestrator | Friday 27 March 2026 00:45:59 +0000 (0:00:01.187) 0:00:30.669 **********
2026-03-27 00:47:39.311772 | orchestrator | [WARNING]: Skipped
2026-03-27 00:47:39.311776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-27 00:47:39.311779 | orchestrator | to this access issue:
2026-03-27 00:47:39.311783 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-27 00:47:39.311786 | orchestrator | directory
2026-03-27 00:47:39.311789 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 00:47:39.311793 | orchestrator |
2026-03-27 00:47:39.311796 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-27 00:47:39.311800 | orchestrator | Friday 27 March 2026 00:46:00 +0000 (0:00:01.045) 0:00:31.715 **********
2026-03-27 00:47:39.311807 | orchestrator | [WARNING]: Skipped
2026-03-27 00:47:39.311810 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-27 00:47:39.311814 | orchestrator | to this access issue:
2026-03-27 00:47:39.311817 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-27 00:47:39.311821 | orchestrator | directory
2026-03-27 00:47:39.311824 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 00:47:39.311828 | orchestrator |
2026-03-27 00:47:39.311831 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-27 00:47:39.311835 | orchestrator | Friday 27 March 2026 00:46:01 +0000 (0:00:00.832) 0:00:32.548 **********
2026-03-27 00:47:39.311838 | orchestrator | changed: [testbed-manager]
2026-03-27 00:47:39.311842 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:47:39.311846 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:47:39.311849 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:47:39.311852 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:47:39.311856 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:47:39.311859 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:47:39.311862 | orchestrator |
2026-03-27 00:47:39.311865 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-27 00:47:39.311869 | orchestrator | Friday 27 March 2026 00:46:06 +0000 (0:00:04.789) 0:00:37.337 **********
2026-03-27 00:47:39.311872 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-27 00:47:39.311876 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-27 00:47:39.311880 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-27 00:47:39.311883 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-27 00:47:39.311887 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-27 00:47:39.311890 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-27 00:47:39.311894 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-27 00:47:39.311898 | orchestrator |
2026-03-27 00:47:39.311901 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-27 00:47:39.311905 | orchestrator | Friday 27 March 2026 00:46:09 +0000 (0:00:03.991) 0:00:40.345 **********
2026-03-27 00:47:39.311908 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:47:39.311912 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:47:39.311915 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:47:39.311919 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:47:39.311922 | orchestrator | changed: [testbed-manager]
2026-03-27 00:47:39.311925 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:47:39.311931 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:47:39.311935 | orchestrator |
2026-03-27 00:47:39.311938 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-03-27 00:47:39.311941 | orchestrator | Friday 27 March 2026 00:46:13 +0000 (0:00:03.991) 0:00:44.338 **********
2026-03-27 00:47:39.311945 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311957 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-27 00:47:39.311961 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:47:39.311965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'},
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.311969 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.311973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:47:39.311986 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.311990 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:47:39.312057 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:47:39.312068 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312074 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312080 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312089 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:47:39.312105 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:47:39.312112 | 
orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312115 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312120 | orchestrator | 2026-03-27 00:47:39.312127 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-27 00:47:39.312133 | orchestrator | Friday 27 March 2026 00:46:16 +0000 (0:00:03.240) 0:00:47.578 ********** 2026-03-27 00:47:39.312138 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-27 00:47:39.312143 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-27 00:47:39.312147 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-27 00:47:39.312152 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-27 00:47:39.312182 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-27 00:47:39.312188 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-27 00:47:39.312194 | orchestrator | 
changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-27 00:47:39.312199 | orchestrator | 2026-03-27 00:47:39.312204 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-27 00:47:39.312209 | orchestrator | Friday 27 March 2026 00:46:19 +0000 (0:00:02.750) 0:00:50.328 ********** 2026-03-27 00:47:39.312214 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-27 00:47:39.312219 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-27 00:47:39.312229 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-27 00:47:39.312238 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-27 00:47:39.312243 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-27 00:47:39.312248 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-27 00:47:39.312253 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-27 00:47:39.312260 | orchestrator | 2026-03-27 00:47:39.312265 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-27 00:47:39.312270 | orchestrator | Friday 27 March 2026 00:46:22 +0000 (0:00:02.808) 0:00:53.137 ********** 2026-03-27 00:47:39.312275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312300 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312322 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-27 00:47:39.312331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-27 00:47:39.312356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312362 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312379 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312398 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312409 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:47:39.312423 | orchestrator | 2026-03-27 00:47:39.312428 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-27 00:47:39.312433 | orchestrator | Friday 27 March 2026 00:46:25 +0000 (0:00:03.492) 0:00:56.629 ********** 2026-03-27 00:47:39.312438 | orchestrator | changed: [testbed-manager] 2026-03-27 00:47:39.312443 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:47:39.312448 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:47:39.312453 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:47:39.312458 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:47:39.312463 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:47:39.312468 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:47:39.312474 | orchestrator | 2026-03-27 00:47:39.312479 | orchestrator | 
TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-27 00:47:39.312484 | orchestrator | Friday 27 March 2026 00:46:27 +0000 (0:00:02.074) 0:00:58.704 ********** 2026-03-27 00:47:39.312489 | orchestrator | changed: [testbed-manager] 2026-03-27 00:47:39.312494 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:47:39.312499 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:47:39.312504 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:47:39.312509 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:47:39.312514 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:47:39.312519 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:47:39.312524 | orchestrator | 2026-03-27 00:47:39.312532 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-27 00:47:39.312537 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:01.551) 0:01:00.256 ********** 2026-03-27 00:47:39.312541 | orchestrator | 2026-03-27 00:47:39.312547 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-27 00:47:39.312552 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:00.063) 0:01:00.319 ********** 2026-03-27 00:47:39.312612 | orchestrator | 2026-03-27 00:47:39.312618 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-27 00:47:39.312623 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:00.061) 0:01:00.381 ********** 2026-03-27 00:47:39.312628 | orchestrator | 2026-03-27 00:47:39.312634 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-27 00:47:39.312638 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:00.062) 0:01:00.444 ********** 2026-03-27 00:47:39.312644 | orchestrator | 2026-03-27 00:47:39.312649 | orchestrator | TASK [common : Flush handlers] 
*************************************************
2026-03-27 00:47:39.312654 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:00.062) 0:01:00.506 **********
2026-03-27 00:47:39.312659 | orchestrator |
2026-03-27 00:47:39.312664 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-27 00:47:39.312669 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:00.060) 0:01:00.567 **********
2026-03-27 00:47:39.312674 | orchestrator |
2026-03-27 00:47:39.312679 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-27 00:47:39.312683 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:00.061) 0:01:00.628 **********
2026-03-27 00:47:39.312686 | orchestrator |
2026-03-27 00:47:39.312689 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-27 00:47:39.312695 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:00.088) 0:01:00.717 **********
2026-03-27 00:47:39.312698 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:47:39.312701 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:47:39.312730 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:47:39.312735 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:47:39.312740 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:47:39.312746 | orchestrator | changed: [testbed-manager]
2026-03-27 00:47:39.312751 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:47:39.312755 | orchestrator |
2026-03-27 00:47:39.312761 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-27 00:47:39.312771 | orchestrator | Friday 27 March 2026 00:46:55 +0000 (0:00:25.119) 0:01:25.837 **********
2026-03-27 00:47:39.312776 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:47:39.312782 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:47:39.312787 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:47:39.312792 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:47:39.312797 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:47:39.312803 | orchestrator | changed: [testbed-manager]
2026-03-27 00:47:39.312808 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:47:39.312813 | orchestrator |
2026-03-27 00:47:39.312818 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-27 00:47:39.312824 | orchestrator | Friday 27 March 2026 00:47:27 +0000 (0:00:32.590) 0:01:58.427 **********
2026-03-27 00:47:39.312829 | orchestrator | ok: [testbed-manager]
2026-03-27 00:47:39.312835 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:47:39.312840 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:47:39.312845 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:47:39.312850 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:47:39.312855 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:47:39.312859 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:47:39.312865 | orchestrator |
2026-03-27 00:47:39.312870 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-27 00:47:39.312875 | orchestrator | Friday 27 March 2026 00:47:29 +0000 (0:00:01.823) 0:02:00.251 **********
2026-03-27 00:47:39.312881 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:47:39.312885 | orchestrator | changed: [testbed-manager]
2026-03-27 00:47:39.312891 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:47:39.312896 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:47:39.312901 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:47:39.312906 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:47:39.312911 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:47:39.312916 | orchestrator |
2026-03-27 00:47:39.312921 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:47:39.312928 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-27 00:47:39.312934 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-27 00:47:39.312939 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-27 00:47:39.312944 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-27 00:47:39.312950 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-27 00:47:39.312955 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-27 00:47:39.312960 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-27 00:47:39.312965 | orchestrator |
2026-03-27 00:47:39.312970 | orchestrator |
2026-03-27 00:47:39.312975 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:47:39.312984 | orchestrator | Friday 27 March 2026 00:47:38 +0000 (0:00:08.668) 0:02:08.919 **********
2026-03-27 00:47:39.312991 | orchestrator | ===============================================================================
2026-03-27 00:47:39.312996 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.59s
2026-03-27 00:47:39.313000 | orchestrator | common : Restart fluentd container ------------------------------------- 25.12s
2026-03-27 00:47:39.313037 | orchestrator | common : Restart cron container ----------------------------------------- 8.67s
2026-03-27 00:47:39.313047 | orchestrator | common : Copying over config.json files for services -------------------- 8.16s
2026-03-27 00:47:39.313053 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.79s
2026-03-27 00:47:39.313058 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.70s
2026-03-27 00:47:39.313064 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.99s
2026-03-27 00:47:39.313068 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.90s
2026-03-27 00:47:39.313073 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.89s
2026-03-27 00:47:39.313078 | orchestrator | common : Check common containers ---------------------------------------- 3.49s
2026-03-27 00:47:39.313084 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.24s
2026-03-27 00:47:39.313089 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.01s
2026-03-27 00:47:39.313094 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.81s
2026-03-27 00:47:39.313099 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.75s
2026-03-27 00:47:39.313108 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.35s
2026-03-27 00:47:39.313156 | orchestrator | common : Creating log volume -------------------------------------------- 2.07s
2026-03-27 00:47:39.313162 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.82s
2026-03-27 00:47:39.313167 | orchestrator | common : include_tasks -------------------------------------------------- 1.55s
2026-03-27 00:47:39.313172 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.55s
2026-03-27 00:47:39.313178 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.34s
2026-03-27 00:47:39.313183 | orchestrator | 2026-03-27 00:47:39 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:47:39.313188 | orchestrator | 2026-03-27 00:47:39 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:47:42.369885 | orchestrator | 2026-03-27 00:47:42 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state STARTED
2026-03-27 00:47:42.369943 | orchestrator | 2026-03-27 00:47:42 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:47:42.369950 | orchestrator | 2026-03-27 00:47:42 | INFO  | Task 8b441b19-225b-4623-8630-ccb5581bdd8b is in state STARTED
2026-03-27 00:47:42.369956 | orchestrator | 2026-03-27 00:47:42 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:47:42.369961 | orchestrator | 2026-03-27 00:47:42 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:47:42.369966 | orchestrator | 2026-03-27 00:47:42 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:47:42.369972 | orchestrator | 2026-03-27 00:47:42 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:47:45.404040 | orchestrator | 2026-03-27 00:47:45 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state STARTED
2026-03-27 00:47:45.405297 | orchestrator | 2026-03-27 00:47:45 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:47:45.405839 | orchestrator | 2026-03-27 00:47:45 | INFO  | Task 8b441b19-225b-4623-8630-ccb5581bdd8b is in state STARTED
2026-03-27 00:47:45.406482 | orchestrator | 2026-03-27 00:47:45 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:47:45.407195 | orchestrator | 2026-03-27 00:47:45 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:47:45.407824 | orchestrator | 2026-03-27 00:47:45 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:47:45.407862 | orchestrator | 2026-03-27 00:47:45 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:47:48.453224 | orchestrator | 2026-03-27 00:47:48 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state STARTED
2026-03-27 00:47:48.453659 | orchestrator | 2026-03-27 00:47:48 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:47:48.454590 | orchestrator | 2026-03-27 00:47:48 | INFO  | Task 8b441b19-225b-4623-8630-ccb5581bdd8b is in state STARTED
2026-03-27 00:47:48.455113 | orchestrator | 2026-03-27 00:47:48 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:47:48.455884 | orchestrator | 2026-03-27 00:47:48 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:47:48.456705 | orchestrator | 2026-03-27 00:47:48 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:47:48.456735 | orchestrator | 2026-03-27 00:47:48 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:47:51.524277 | orchestrator | 2026-03-27 00:47:51 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state STARTED
2026-03-27 00:47:51.526253 | orchestrator | 2026-03-27 00:47:51 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:47:51.528161 | orchestrator | 2026-03-27 00:47:51 | INFO  | Task 8b441b19-225b-4623-8630-ccb5581bdd8b is in state STARTED
2026-03-27 00:47:51.529706 | orchestrator | 2026-03-27 00:47:51 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:47:51.531306 | orchestrator | 2026-03-27 00:47:51 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:47:51.532760 | orchestrator | 2026-03-27 00:47:51 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:47:51.532820 | orchestrator | 2026-03-27 00:47:51 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:47:54.568847 | orchestrator | 2026-03-27 00:47:54 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state STARTED
2026-03-27 00:47:54.569115 | orchestrator | 2026-03-27 00:47:54 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:47:54.569910 | orchestrator | 2026-03-27 00:47:54 | INFO  | Task 8b441b19-225b-4623-8630-ccb5581bdd8b is in state STARTED
2026-03-27 00:47:54.570737 | orchestrator | 2026-03-27 00:47:54 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:47:54.572882 | orchestrator | 2026-03-27 00:47:54 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:47:54.573405 | orchestrator | 2026-03-27 00:47:54 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:47:54.573428 | orchestrator | 2026-03-27 00:47:54 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:47:57.610735 | orchestrator | 2026-03-27 00:47:57 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state STARTED
2026-03-27 00:47:57.613044 | orchestrator | 2026-03-27 00:47:57 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:47:57.614732 | orchestrator | 2026-03-27 00:47:57 | INFO  | Task 8b441b19-225b-4623-8630-ccb5581bdd8b is in state STARTED
2026-03-27 00:47:57.616424 | orchestrator | 2026-03-27 00:47:57 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:47:57.617696 | orchestrator | 2026-03-27 00:47:57 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:47:57.619020 | orchestrator | 2026-03-27 00:47:57 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:47:57.619096 | orchestrator | 2026-03-27 00:47:57 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:00.652438 | orchestrator | 2026-03-27 00:48:00 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state STARTED
2026-03-27 00:48:00.652804 | orchestrator | 2026-03-27 00:48:00 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:00.652839 | orchestrator | 2026-03-27 00:48:00 | INFO  | Task 8b441b19-225b-4623-8630-ccb5581bdd8b is in state SUCCESS
2026-03-27 00:48:00.653739 | orchestrator | 2026-03-27 00:48:00 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:00.654209 | orchestrator | 2026-03-27 00:48:00 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:00.657142 | orchestrator | 2026-03-27 00:48:00 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:00.657718 | orchestrator | 2026-03-27 00:48:00 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:00.657764 | orchestrator | 2026-03-27 00:48:00 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:03.689739 | orchestrator | 2026-03-27 00:48:03 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state STARTED
2026-03-27 00:48:03.689798 | orchestrator | 2026-03-27 00:48:03 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:03.690615 | orchestrator | 2026-03-27 00:48:03 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:03.691409 | orchestrator | 2026-03-27 00:48:03 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:03.692319 | orchestrator | 2026-03-27 00:48:03 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:03.693228 | orchestrator | 2026-03-27 00:48:03 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:03.693264 | orchestrator | 2026-03-27 00:48:03 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:06.722770 | orchestrator | 2026-03-27 00:48:06 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state STARTED
2026-03-27 00:48:06.723416 | orchestrator | 2026-03-27 00:48:06 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:06.726812 | orchestrator | 2026-03-27 00:48:06 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:06.728608 | orchestrator | 2026-03-27 00:48:06 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:06.731086 | orchestrator | 2026-03-27 00:48:06 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:06.732296 | orchestrator | 2026-03-27 00:48:06 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:06.732349 | orchestrator | 2026-03-27 00:48:06 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:09.772963 | orchestrator | 2026-03-27 00:48:09 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state STARTED
2026-03-27 00:48:09.773407 | orchestrator | 2026-03-27 00:48:09 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:09.775881 | orchestrator | 2026-03-27 00:48:09 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:09.777043 | orchestrator | 2026-03-27 00:48:09 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:09.777886 | orchestrator | 2026-03-27 00:48:09 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:09.778843 | orchestrator | 2026-03-27 00:48:09 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:09.778867 | orchestrator | 2026-03-27 00:48:09 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:12.817554 | orchestrator | 2026-03-27 00:48:12 | INFO  | Task abdf7bd8-c329-4116-bc7e-386e8a8289ed is in state SUCCESS
2026-03-27 00:48:12.819371 | orchestrator |
2026-03-27 00:48:12.819412 | orchestrator |
2026-03-27 00:48:12.819418 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 00:48:12.819424 | orchestrator |
2026-03-27 00:48:12.819430 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 00:48:12.819435 | orchestrator | Friday 27 March 2026 00:47:43 +0000 (0:00:00.837) 0:00:00.837 **********
2026-03-27 00:48:12.819441 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:48:12.819448 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:48:12.819451 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:48:12.819455 | orchestrator |
2026-03-27 00:48:12.819458 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 00:48:12.819461 | orchestrator | Friday 27 March 2026 00:47:43 +0000 (0:00:00.392) 0:00:01.230 **********
2026-03-27 00:48:12.819465 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-27 00:48:12.819471 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-27 00:48:12.819476 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-27 00:48:12.819482 | orchestrator |
2026-03-27 00:48:12.819487 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-27 00:48:12.819492 | orchestrator |
2026-03-27 00:48:12.819497 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-27 00:48:12.819503 | orchestrator | Friday 27 March 2026 00:47:43 +0000 (0:00:00.427) 0:00:01.657 **********
2026-03-27 00:48:12.819508 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:48:12.819514 | orchestrator |
2026-03-27 00:48:12.819519 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-27 00:48:12.819524 | orchestrator | Friday 27 March 2026 00:47:44 +0000 (0:00:00.680) 0:00:02.338 **********
2026-03-27 00:48:12.819529 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-27 00:48:12.819534 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-27 00:48:12.819539 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-27 00:48:12.819544 | orchestrator |
2026-03-27 00:48:12.819550 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-27 00:48:12.819555 | orchestrator | Friday 27 March 2026 00:47:46 +0000 (0:00:01.846) 0:00:04.185 **********
2026-03-27 00:48:12.819560 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-27 00:48:12.819565 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-27 00:48:12.819570 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-27 00:48:12.819575 | orchestrator |
2026-03-27 00:48:12.819581 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-27 00:48:12.819586 | orchestrator | Friday 27 March 2026 00:47:48 +0000 (0:00:01.740) 0:00:05.926 **********
2026-03-27 00:48:12.819591 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:48:12.819596 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:48:12.819610 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:48:12.819615 | orchestrator |
2026-03-27 00:48:12.819620 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-27 00:48:12.819626 | orchestrator | Friday 27 March 2026 00:47:50 +0000 (0:00:02.451) 0:00:08.377 **********
2026-03-27 00:48:12.819631 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:48:12.819636 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:48:12.819641 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:48:12.819646 | orchestrator |
2026-03-27 00:48:12.819651 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:48:12.819667 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:48:12.819674 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:48:12.819679 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:48:12.819684 | orchestrator |
2026-03-27 00:48:12.819689 | orchestrator |
2026-03-27 00:48:12.819694 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:48:12.819700 | orchestrator | Friday 27 March 2026 00:47:58 +0000 (0:00:07.785) 0:00:16.163 **********
2026-03-27 00:48:12.819705 | orchestrator | ===============================================================================
2026-03-27 00:48:12.819710 | orchestrator | memcached : Restart memcached container --------------------------------- 7.79s
2026-03-27 00:48:12.819715 | orchestrator | memcached : Check memcached container ----------------------------------- 2.45s
2026-03-27 00:48:12.819720 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.85s
2026-03-27 00:48:12.819725 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.74s
2026-03-27 00:48:12.819730 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.68s
2026-03-27 00:48:12.819735 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2026-03-27 00:48:12.819740 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2026-03-27 00:48:12.819745 | orchestrator |
2026-03-27 00:48:12.819750 | orchestrator |
2026-03-27 00:48:12.819755 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 00:48:12.819760 | orchestrator |
2026-03-27 00:48:12.819786 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 00:48:12.819792 | orchestrator | Friday 27 March 2026 00:47:43 +0000 (0:00:00.489) 0:00:00.489 **********
2026-03-27 00:48:12.819797 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:48:12.819803 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:48:12.819808 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:48:12.819814 | orchestrator |
2026-03-27 00:48:12.819819 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 00:48:12.819833 | orchestrator | Friday 27 March 2026 00:47:44 +0000 (0:00:00.330) 0:00:00.819 **********
2026-03-27 00:48:12.819836 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-27 00:48:12.819839 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-27 00:48:12.819843 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-27 00:48:12.819846 | orchestrator |
2026-03-27 00:48:12.819849 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-27 00:48:12.819852 | orchestrator |
2026-03-27 00:48:12.819855 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-27 00:48:12.819858 | orchestrator | Friday 27 March 2026 00:47:44 +0000 (0:00:00.413) 0:00:01.233 **********
2026-03-27 00:48:12.819861 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:48:12.819864 | orchestrator |
2026-03-27 00:48:12.819867 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-27 00:48:12.819870 | orchestrator | Friday 27 March 2026 00:47:44 +0000 (0:00:00.483) 0:00:01.717 **********
2026-03-27 00:48:12.819875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819905 | orchestrator |
2026-03-27 00:48:12.819909 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-27 00:48:12.819912 | orchestrator | Friday 27 March 2026 00:47:47 +0000 (0:00:02.419) 0:00:04.137 **********
2026-03-27 00:48:12.819915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-27 00:48:12.819947 | orchestrator |
2026-03-27 00:48:12.819954 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-27 00:48:12.819957 | orchestrator | Friday 27 March 2026 00:47:49 +0000 (0:00:02.466) 0:00:06.604 ********** 2026-03-27 00:48:12.819960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-27 00:48:12.819966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-27 00:48:12.819969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-27 
00:48:12.819973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-27 00:48:12.820019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-27 00:48:12.820024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-27 00:48:12.820027 | orchestrator |
2026-03-27 00:48:12.820033 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-03-27 00:48:12.820036 | orchestrator | Friday 27 March 2026 00:47:52 +0000 (0:00:03.098) 0:00:09.702 **********
2026-03-27 00:48:12.820039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-27 00:48:12.820045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-27 00:48:12.820048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-27 00:48:12.820054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-27 00:48:12.820057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-27 00:48:12.820060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-27 00:48:12.820063 | orchestrator |
2026-03-27 00:48:12.820066 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-27 00:48:12.820069 | orchestrator | Friday 27 March 2026 00:47:54 +0000 (0:00:01.594) 0:00:11.296 **********
2026-03-27 00:48:12.820073 | orchestrator |
2026-03-27 00:48:12.820076 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-27 00:48:12.820081 | orchestrator | Friday 27 March 2026 00:47:54 +0000 (0:00:00.229) 0:00:11.526 **********
2026-03-27 00:48:12.820086 | orchestrator |
2026-03-27 00:48:12.820089 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-27 00:48:12.820092 | orchestrator | Friday 27 March 2026 00:47:54 +0000 (0:00:00.073) 0:00:11.600 **********
2026-03-27 00:48:12.820095 | orchestrator |
2026-03-27 00:48:12.820099 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-27 00:48:12.820102 | orchestrator | Friday 27 March 2026 00:47:54 +0000 (0:00:00.092) 0:00:11.692 **********
2026-03-27 00:48:12.820105 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:48:12.820108 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:48:12.820111 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:48:12.820114 | orchestrator |
2026-03-27 00:48:12.820117 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-27 00:48:12.820120 | orchestrator | Friday 27 March 2026 00:48:01 +0000 (0:00:06.672) 0:00:18.365 **********
2026-03-27 00:48:12.820123 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:48:12.820126 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:48:12.820129 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:48:12.820132 | orchestrator |
2026-03-27 00:48:12.820135 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:48:12.820138 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:48:12.820141 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:48:12.820144 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:48:12.820147 | orchestrator |
2026-03-27 00:48:12.820152 | orchestrator |
2026-03-27 00:48:12.820157 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:48:12.820161 | orchestrator | Friday 27 March 2026 00:48:10 +0000 (0:00:08.462) 0:00:26.828 **********
2026-03-27 00:48:12.820167 | orchestrator | ===============================================================================
2026-03-27 00:48:12.820172 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.46s
2026-03-27 00:48:12.820177 | orchestrator | redis : Restart redis container ----------------------------------------- 6.67s
2026-03-27 00:48:12.820182 | orchestrator | redis : Copying over redis config files --------------------------------- 3.10s
2026-03-27 00:48:12.820187 | orchestrator | redis : Copying over default config.json files -------------------------- 2.47s
2026-03-27 00:48:12.820192 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.42s
2026-03-27 00:48:12.820199 | orchestrator | redis : Check redis containers
------------------------------------------ 1.59s
2026-03-27 00:48:12.820205 | orchestrator | redis : include_tasks --------------------------------------------------- 0.48s
2026-03-27 00:48:12.820210 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2026-03-27 00:48:12.820215 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.40s
2026-03-27 00:48:12.820220 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-03-27 00:48:12.820225 | orchestrator | 2026-03-27 00:48:12 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:12.820270 | orchestrator | 2026-03-27 00:48:12 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:12.820609 | orchestrator | 2026-03-27 00:48:12 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:12.821237 | orchestrator | 2026-03-27 00:48:12 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:12.822302 | orchestrator | 2026-03-27 00:48:12 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:12.822326 | orchestrator | 2026-03-27 00:48:12 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:15.871490 | orchestrator | 2026-03-27 00:48:15 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:15.871541 | orchestrator | 2026-03-27 00:48:15 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:15.871546 | orchestrator | 2026-03-27 00:48:15 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:15.871551 | orchestrator | 2026-03-27 00:48:15 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:15.871554 | orchestrator | 2026-03-27 00:48:15 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:15.871559 | orchestrator | 2026-03-27 00:48:15 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:18.943324 | orchestrator | 2026-03-27 00:48:18 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:18.944086 | orchestrator | 2026-03-27 00:48:18 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:18.946067 | orchestrator | 2026-03-27 00:48:18 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:18.946105 | orchestrator | 2026-03-27 00:48:18 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:18.946111 | orchestrator | 2026-03-27 00:48:18 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:18.946115 | orchestrator | 2026-03-27 00:48:18 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:21.972408 | orchestrator | 2026-03-27 00:48:21 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:21.974688 | orchestrator | 2026-03-27 00:48:21 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:21.977689 | orchestrator | 2026-03-27 00:48:21 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:21.979615 | orchestrator | 2026-03-27 00:48:21 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:21.980798 | orchestrator | 2026-03-27 00:48:21 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:21.980837 | orchestrator | 2026-03-27 00:48:21 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:25.027796 | orchestrator | 2026-03-27 00:48:25 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:25.031768 | orchestrator | 2026-03-27 00:48:25 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:25.034349 | orchestrator | 2026-03-27 00:48:25 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:25.039467 | orchestrator | 2026-03-27 00:48:25 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:25.043167 | orchestrator | 2026-03-27 00:48:25 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:25.043244 | orchestrator | 2026-03-27 00:48:25 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:28.133770 | orchestrator | 2026-03-27 00:48:28 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:28.134288 | orchestrator | 2026-03-27 00:48:28 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:28.135168 | orchestrator | 2026-03-27 00:48:28 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:28.135862 | orchestrator | 2026-03-27 00:48:28 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:28.136904 | orchestrator | 2026-03-27 00:48:28 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:28.136942 | orchestrator | 2026-03-27 00:48:28 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:31.177051 | orchestrator | 2026-03-27 00:48:31 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:31.177176 | orchestrator | 2026-03-27 00:48:31 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:31.181134 | orchestrator | 2026-03-27 00:48:31 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:31.183812 | orchestrator | 2026-03-27 00:48:31 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:31.187407 | orchestrator | 2026-03-27 00:48:31 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:31.187455 | orchestrator | 2026-03-27 00:48:31 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:34.268830 | orchestrator | 2026-03-27 00:48:34 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:34.269774 | orchestrator | 2026-03-27 00:48:34 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:34.270398 | orchestrator | 2026-03-27 00:48:34 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:34.271158 | orchestrator | 2026-03-27 00:48:34 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:34.271995 | orchestrator | 2026-03-27 00:48:34 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:34.272042 | orchestrator | 2026-03-27 00:48:34 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:37.311077 | orchestrator | 2026-03-27 00:48:37 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:37.311167 | orchestrator | 2026-03-27 00:48:37 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:37.314543 | orchestrator | 2026-03-27 00:48:37 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:37.314773 | orchestrator | 2026-03-27 00:48:37 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:37.315534 | orchestrator | 2026-03-27 00:48:37 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:37.315589 | orchestrator | 2026-03-27 00:48:37 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:40.341801 | orchestrator | 2026-03-27 00:48:40 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:40.343279 | orchestrator | 2026-03-27 00:48:40 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:40.343793 | orchestrator | 2026-03-27 00:48:40 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:40.344769 | orchestrator | 2026-03-27 00:48:40 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:40.345509 | orchestrator | 2026-03-27 00:48:40 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:40.345573 | orchestrator | 2026-03-27 00:48:40 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:43.533105 | orchestrator | 2026-03-27 00:48:43 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:43.534448 | orchestrator | 2026-03-27 00:48:43 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:43.534781 | orchestrator | 2026-03-27 00:48:43 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:43.536822 | orchestrator | 2026-03-27 00:48:43 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:43.538988 | orchestrator | 2026-03-27 00:48:43 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:43.539043 | orchestrator | 2026-03-27 00:48:43 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:46.650811 | orchestrator | 2026-03-27 00:48:46 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:46.650903 | orchestrator | 2026-03-27 00:48:46 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:46.650911 | orchestrator | 2026-03-27 00:48:46 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:46.651028 | orchestrator | 2026-03-27 00:48:46 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state STARTED
2026-03-27 00:48:46.651660 | orchestrator | 2026-03-27 00:48:46 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:46.651686 | orchestrator |
2026-03-27 00:48:46 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:49.687486 | orchestrator | 2026-03-27 00:48:49 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:49.687670 | orchestrator | 2026-03-27 00:48:49 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:49.688491 | orchestrator | 2026-03-27 00:48:49 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:49.689825 | orchestrator | 2026-03-27 00:48:49 | INFO  | Task 1f6df156-1bfa-444a-b2b4-5688765ab763 is in state SUCCESS
2026-03-27 00:48:49.691475 | orchestrator |
2026-03-27 00:48:49.691526 | orchestrator |
2026-03-27 00:48:49.691532 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 00:48:49.691538 | orchestrator |
2026-03-27 00:48:49.691542 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 00:48:49.691546 | orchestrator | Friday 27 March 2026 00:47:44 +0000 (0:00:00.400) 0:00:00.400 **********
2026-03-27 00:48:49.691551 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:48:49.691556 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:48:49.691560 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:48:49.691564 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:48:49.691568 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:48:49.691571 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:48:49.691575 | orchestrator |
2026-03-27 00:48:49.691579 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 00:48:49.691583 | orchestrator | Friday 27 March 2026 00:47:45 +0000 (0:00:01.095) 0:00:01.496 **********
2026-03-27 00:48:49.691587 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-27 00:48:49.691591 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-27 00:48:49.691595 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-27 00:48:49.691599 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-27 00:48:49.691603 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-27 00:48:49.691607 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-27 00:48:49.691610 | orchestrator |
2026-03-27 00:48:49.691614 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-27 00:48:49.691618 | orchestrator |
2026-03-27 00:48:49.691622 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-27 00:48:49.691626 | orchestrator | Friday 27 March 2026 00:47:47 +0000 (0:00:01.078) 0:00:02.574 **********
2026-03-27 00:48:49.691645 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:48:49.691650 | orchestrator |
2026-03-27 00:48:49.691654 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-27 00:48:49.691658 | orchestrator | Friday 27 March 2026 00:47:48 +0000 (0:00:01.370) 0:00:03.945 **********
2026-03-27 00:48:49.691662 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-27 00:48:49.691666 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-27 00:48:49.691669 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-27 00:48:49.691673 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-27 00:48:49.691677 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-27 00:48:49.691680 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-27 00:48:49.691684 | orchestrator |
2026-03-27 00:48:49.691688 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-27 00:48:49.691692 | orchestrator | Friday 27 March 2026 00:47:50 +0000 (0:00:01.711) 0:00:05.657 **********
2026-03-27 00:48:49.691695 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-27 00:48:49.691699 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-27 00:48:49.691705 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-27 00:48:49.691711 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-27 00:48:49.691716 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-27 00:48:49.691721 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-27 00:48:49.691727 | orchestrator |
2026-03-27 00:48:49.691733 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-27 00:48:49.691739 | orchestrator | Friday 27 March 2026 00:47:52 +0000 (0:00:01.997) 0:00:07.654 **********
2026-03-27 00:48:49.691745 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-27 00:48:49.691751 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:48:49.691758 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-03-27 00:48:49.691764 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:48:49.691767 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-03-27 00:48:49.691771 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:48:49.691775 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-27 00:48:49.691787 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:48:49.691792 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-27 00:48:49.691796 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:48:49.691799 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-27 00:48:49.691803 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:48:49.691806 | orchestrator |
2026-03-27 00:48:49.691810 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-03-27 00:48:49.691814 | orchestrator | Friday 27 March 2026 00:47:53 +0000 (0:00:00.702)? 0:00:09.084 **********
2026-03-27 00:48:49.691818 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:48:49.691821 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:48:49.691825 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:48:49.691829 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:48:49.691832 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:48:49.691836 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:48:49.691839 | orchestrator |
2026-03-27 00:48:49.691843 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-27 00:48:49.691847 | orchestrator | Friday 27 March 2026 00:47:54 +0000 (0:00:00.702) 0:00:09.787 **********
2026-03-27 00:48:49.691866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-27 00:48:49.691877 | orchestrator | changed: [testbed-node-5] => (item={'key':
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-27 00:48:49.691881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-27 00:48:49.691886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-27 00:48:49.691893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-27 00:48:49.691897 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-27 00:48:49.691909 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-27 00:48:49.691913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-27 00:48:49.691917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-27 00:48:49.691921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-27 00:48:49.691927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-27 00:48:49.691934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-27 00:48:49.691941 | orchestrator |
2026-03-27 00:48:49.691945 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-27 00:48:49.691974 | orchestrator | Friday 27 March 2026 00:47:55 +0000 (0:00:01.428) 0:00:11.216 **********
2026-03-27 00:48:49.691978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-27 00:48:49.691982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-27 00:48:49.691986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-27 00:48:49.691990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-27 00:48:49.691996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692045 | orchestrator | 2026-03-27 00:48:49.692050 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-27 00:48:49.692054 | orchestrator | Friday 27 March 2026 00:47:58 +0000 (0:00:02.564) 0:00:13.781 ********** 2026-03-27 00:48:49.692059 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:48:49.692063 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:48:49.692067 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:48:49.692072 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:48:49.692076 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:48:49.692080 | orchestrator | skipping: [testbed-node-2] 
2026-03-27 00:48:49.692085 | orchestrator | 2026-03-27 00:48:49.692089 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-27 00:48:49.692093 | orchestrator | Friday 27 March 2026 00:47:59 +0000 (0:00:01.018) 0:00:14.799 ********** 2026-03-27 00:48:49.692098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692115 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-27 00:48:49.692173 | orchestrator | 2026-03-27 00:48:49.692177 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-27 00:48:49.692181 | orchestrator | Friday 27 March 2026 00:48:01 +0000 (0:00:02.719) 0:00:17.519 ********** 2026-03-27 00:48:49.692185 | orchestrator | 2026-03-27 00:48:49.692190 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-27 00:48:49.692194 | orchestrator | Friday 27 March 2026 00:48:02 +0000 (0:00:00.320) 0:00:17.840 ********** 2026-03-27 00:48:49.692198 | orchestrator | 2026-03-27 00:48:49.692202 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-27 00:48:49.692207 | orchestrator | Friday 27 March 
2026 00:48:02 +0000 (0:00:00.280) 0:00:18.120 ********** 2026-03-27 00:48:49.692211 | orchestrator | 2026-03-27 00:48:49.692215 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-27 00:48:49.692220 | orchestrator | Friday 27 March 2026 00:48:02 +0000 (0:00:00.136) 0:00:18.256 ********** 2026-03-27 00:48:49.692224 | orchestrator | 2026-03-27 00:48:49.692228 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-27 00:48:49.692232 | orchestrator | Friday 27 March 2026 00:48:03 +0000 (0:00:00.320) 0:00:18.576 ********** 2026-03-27 00:48:49.692237 | orchestrator | 2026-03-27 00:48:49.692241 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-27 00:48:49.692249 | orchestrator | Friday 27 March 2026 00:48:03 +0000 (0:00:00.153) 0:00:18.730 ********** 2026-03-27 00:48:49.692253 | orchestrator | 2026-03-27 00:48:49.692257 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-27 00:48:49.692261 | orchestrator | Friday 27 March 2026 00:48:03 +0000 (0:00:00.169) 0:00:18.899 ********** 2026-03-27 00:48:49.692266 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:48:49.692270 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:48:49.692275 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:48:49.692279 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:48:49.692283 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:48:49.692288 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:48:49.692292 | orchestrator | 2026-03-27 00:48:49.692296 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-27 00:48:49.692300 | orchestrator | Friday 27 March 2026 00:48:13 +0000 (0:00:09.737) 0:00:28.637 ********** 2026-03-27 00:48:49.692305 | orchestrator | ok: [testbed-node-3] 2026-03-27 
00:48:49.692309 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:48:49.692313 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:48:49.692318 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:48:49.692322 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:48:49.692326 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:48:49.692331 | orchestrator | 2026-03-27 00:48:49.692336 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-27 00:48:49.692340 | orchestrator | Friday 27 March 2026 00:48:14 +0000 (0:00:01.368) 0:00:30.006 ********** 2026-03-27 00:48:49.692347 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:48:49.692351 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:48:49.692355 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:48:49.692359 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:48:49.692364 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:48:49.692368 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:48:49.692373 | orchestrator | 2026-03-27 00:48:49.692377 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-27 00:48:49.692382 | orchestrator | Friday 27 March 2026 00:48:24 +0000 (0:00:10.482) 0:00:40.488 ********** 2026-03-27 00:48:49.692386 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-27 00:48:49.692390 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-27 00:48:49.692395 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-27 00:48:49.692399 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-27 00:48:49.692404 | orchestrator | changed: [testbed-node-0] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-27 00:48:49.692409 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-27 00:48:49.692413 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-27 00:48:49.692417 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-27 00:48:49.692421 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-27 00:48:49.692445 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-27 00:48:49.692449 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-27 00:48:49.692453 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-27 00:48:49.692457 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-27 00:48:49.692464 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-27 00:48:49.692468 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-27 00:48:49.692472 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-27 00:48:49.692475 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-27 00:48:49.692479 | orchestrator | ok: [testbed-node-2] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-27 00:48:49.692483 | orchestrator | 2026-03-27 00:48:49.692487 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-27 00:48:49.692491 | orchestrator | Friday 27 March 2026 00:48:33 +0000 (0:00:08.935) 0:00:49.424 ********** 2026-03-27 00:48:49.692497 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-27 00:48:49.692503 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:48:49.692509 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-27 00:48:49.692514 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:48:49.692520 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-27 00:48:49.692526 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:48:49.692532 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-27 00:48:49.692538 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-27 00:48:49.692544 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-27 00:48:49.692550 | orchestrator | 2026-03-27 00:48:49.692556 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-27 00:48:49.692561 | orchestrator | Friday 27 March 2026 00:48:36 +0000 (0:00:02.997) 0:00:52.421 ********** 2026-03-27 00:48:49.692567 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-27 00:48:49.692573 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:48:49.692578 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-27 00:48:49.692584 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:48:49.692591 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-27 00:48:49.692596 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:48:49.692603 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 
2026-03-27 00:48:49.692609 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-27 00:48:49.692615 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-27 00:48:49.692621 | orchestrator | 2026-03-27 00:48:49.692627 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-27 00:48:49.692633 | orchestrator | Friday 27 March 2026 00:48:41 +0000 (0:00:04.147) 0:00:56.568 ********** 2026-03-27 00:48:49.692639 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:48:49.692644 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:48:49.692653 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:48:49.692663 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:48:49.692669 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:48:49.692675 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:48:49.692681 | orchestrator | 2026-03-27 00:48:49.692687 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:48:49.692694 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-27 00:48:49.692701 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-27 00:48:49.692707 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-27 00:48:49.692719 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-27 00:48:49.692725 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-27 00:48:49.692736 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-27 00:48:49.692742 | orchestrator | 2026-03-27 00:48:49.692746 | orchestrator | 2026-03-27 00:48:49.692750 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:48:49.692754 | orchestrator | Friday 27 March 2026 00:48:48 +0000 (0:00:07.856) 0:01:04.425 ********** 2026-03-27 00:48:49.692758 | orchestrator | =============================================================================== 2026-03-27 00:48:49.692761 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.34s 2026-03-27 00:48:49.692765 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.74s 2026-03-27 00:48:49.692768 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.94s 2026-03-27 00:48:49.692772 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.15s 2026-03-27 00:48:49.692776 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.00s 2026-03-27 00:48:49.692779 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.72s 2026-03-27 00:48:49.692783 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.56s 2026-03-27 00:48:49.692786 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.00s 2026-03-27 00:48:49.692790 | orchestrator | module-load : Load modules ---------------------------------------------- 1.71s 2026-03-27 00:48:49.692794 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.43s 2026-03-27 00:48:49.692797 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.43s 2026-03-27 00:48:49.692801 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.38s 2026-03-27 00:48:49.692804 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.37s 2026-03-27 00:48:49.692808 | orchestrator | 
openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.37s
2026-03-27 00:48:49.692812 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s
2026-03-27 00:48:49.692815 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.08s
2026-03-27 00:48:49.692819 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.02s
2026-03-27 00:48:49.692823 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.70s
2026-03-27 00:48:49.692827 | orchestrator | 2026-03-27 00:48:49 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:49.692830 | orchestrator | 2026-03-27 00:48:49 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:48:52.724154 | orchestrator | 2026-03-27 00:48:52 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:48:52.725674 | orchestrator | 2026-03-27 00:48:52 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:48:52.728187 | orchestrator | 2026-03-27 00:48:52 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:48:52.728778 | orchestrator | 2026-03-27 00:48:52 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state STARTED
2026-03-27 00:48:52.729410 | orchestrator | 2026-03-27 00:48:52 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:48:52.730087 | orchestrator | 2026-03-27 00:48:52 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds repeated every ~3 seconds from 00:48:55 through 00:50:09; all five tasks remained in state STARTED ...]
2026-03-27 00:50:12.037902 | orchestrator | 2026-03-27 00:50:12 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state STARTED
2026-03-27 00:50:12.038148 | orchestrator | 2026-03-27 00:50:12 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:12.038491 | orchestrator | 2026-03-27 00:50:12 | INFO  | Task 
4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED 2026-03-27 00:50:12.039185 | orchestrator | 2026-03-27 00:50:12 | INFO  | Task 25a26601-d688-46cc-adb0-878290e6ddf6 is in state SUCCESS 2026-03-27 00:50:12.040430 | orchestrator | 2026-03-27 00:50:12.040527 | orchestrator | 2026-03-27 00:50:12.040540 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-27 00:50:12.040548 | orchestrator | 2026-03-27 00:50:12.040555 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-27 00:50:12.040562 | orchestrator | Friday 27 March 2026 00:48:03 +0000 (0:00:00.105) 0:00:00.105 ********** 2026-03-27 00:50:12.040569 | orchestrator | ok: [localhost] => { 2026-03-27 00:50:12.040578 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-27 00:50:12.040584 | orchestrator | } 2026-03-27 00:50:12.040590 | orchestrator | 2026-03-27 00:50:12.040596 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-27 00:50:12.040602 | orchestrator | Friday 27 March 2026 00:48:03 +0000 (0:00:00.036) 0:00:00.142 ********** 2026-03-27 00:50:12.040610 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-27 00:50:12.040618 | orchestrator | ...ignoring 2026-03-27 00:50:12.040624 | orchestrator | 2026-03-27 00:50:12.040630 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-27 00:50:12.040637 | orchestrator | Friday 27 March 2026 00:48:06 +0000 (0:00:03.434) 0:00:03.576 ********** 2026-03-27 00:50:12.040643 | orchestrator | skipping: [localhost] 2026-03-27 00:50:12.040649 | orchestrator | 2026-03-27 00:50:12.040655 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-27 00:50:12.040662 | orchestrator | Friday 27 March 2026 00:48:06 +0000 (0:00:00.073) 0:00:03.649 ********** 2026-03-27 00:50:12.040669 | orchestrator | ok: [localhost] 2026-03-27 00:50:12.040677 | orchestrator | 2026-03-27 00:50:12.040687 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 00:50:12.040693 | orchestrator | 2026-03-27 00:50:12.040699 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 00:50:12.040705 | orchestrator | Friday 27 March 2026 00:48:06 +0000 (0:00:00.196) 0:00:03.846 ********** 2026-03-27 00:50:12.040740 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:12.040747 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:12.040753 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:12.040760 | orchestrator | 2026-03-27 00:50:12.040765 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 00:50:12.040772 | orchestrator | Friday 27 March 2026 00:48:07 +0000 (0:00:00.293) 0:00:04.139 ********** 2026-03-27 00:50:12.040778 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-27 00:50:12.040784 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-03-27 00:50:12.040790 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-27 00:50:12.040796 | orchestrator | 2026-03-27 00:50:12.040803 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-27 00:50:12.040809 | orchestrator | 2026-03-27 00:50:12.040815 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-27 00:50:12.040822 | orchestrator | Friday 27 March 2026 00:48:07 +0000 (0:00:00.351) 0:00:04.491 ********** 2026-03-27 00:50:12.040828 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:50:12.040836 | orchestrator | 2026-03-27 00:50:12.040842 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-27 00:50:12.040846 | orchestrator | Friday 27 March 2026 00:48:08 +0000 (0:00:00.579) 0:00:05.070 ********** 2026-03-27 00:50:12.040850 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:12.040877 | orchestrator | 2026-03-27 00:50:12.040884 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-27 00:50:12.040890 | orchestrator | Friday 27 March 2026 00:48:09 +0000 (0:00:01.265) 0:00:06.335 ********** 2026-03-27 00:50:12.040896 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:12.040904 | orchestrator | 2026-03-27 00:50:12.040910 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-27 00:50:12.040917 | orchestrator | Friday 27 March 2026 00:48:09 +0000 (0:00:00.329) 0:00:06.665 ********** 2026-03-27 00:50:12.040924 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:12.040931 | orchestrator | 2026-03-27 00:50:12.040938 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-27 00:50:12.040955 | 
orchestrator | Friday 27 March 2026 00:48:10 +0000 (0:00:00.356) 0:00:07.021 ********** 2026-03-27 00:50:12.040959 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:12.040963 | orchestrator | 2026-03-27 00:50:12.040967 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-27 00:50:12.040971 | orchestrator | Friday 27 March 2026 00:48:10 +0000 (0:00:00.348) 0:00:07.370 ********** 2026-03-27 00:50:12.040975 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:12.040979 | orchestrator | 2026-03-27 00:50:12.040983 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-27 00:50:12.040988 | orchestrator | Friday 27 March 2026 00:48:10 +0000 (0:00:00.351) 0:00:07.721 ********** 2026-03-27 00:50:12.040992 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:50:12.040997 | orchestrator | 2026-03-27 00:50:12.041001 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-27 00:50:12.041005 | orchestrator | Friday 27 March 2026 00:48:11 +0000 (0:00:00.615) 0:00:08.337 ********** 2026-03-27 00:50:12.041010 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:12.041015 | orchestrator | 2026-03-27 00:50:12.041019 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-27 00:50:12.041024 | orchestrator | Friday 27 March 2026 00:48:12 +0000 (0:00:00.635) 0:00:08.972 ********** 2026-03-27 00:50:12.041028 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:12.041033 | orchestrator | 2026-03-27 00:50:12.041037 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-27 00:50:12.041042 | orchestrator | Friday 27 March 2026 00:48:12 +0000 (0:00:00.522) 0:00:09.495 ********** 2026-03-27 00:50:12.041056 | orchestrator | 
skipping: [testbed-node-0] 2026-03-27 00:50:12.041066 | orchestrator | 2026-03-27 00:50:12.041089 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-27 00:50:12.041096 | orchestrator | Friday 27 March 2026 00:48:12 +0000 (0:00:00.274) 0:00:09.770 ********** 2026-03-27 00:50:12.041108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-27 00:50:12.041120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-27 00:50:12.041131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-27 00:50:12.041139 | orchestrator | 2026-03-27 00:50:12.041145 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-27 00:50:12.041151 | orchestrator | Friday 27 March 2026 00:48:14 +0000 (0:00:01.257) 0:00:11.027 ********** 2026-03-27 00:50:12.041166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:50:12.041180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:50:12.041186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:50:12.041191 | orchestrator |
2026-03-27 00:50:12.041195 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-03-27 00:50:12.041200 | orchestrator | Friday 27 March 2026 00:48:15 +0000 (0:00:01.698) 0:00:12.726 **********
2026-03-27 00:50:12.041204 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-27 00:50:12.041210 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-27 00:50:12.041217 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-27 00:50:12.041221 | orchestrator |
2026-03-27 00:50:12.041226 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-03-27 00:50:12.041231 | orchestrator | Friday 27 March 2026 00:48:18 +0000 (0:00:02.693) 0:00:15.419 **********
2026-03-27 00:50:12.041235 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-27 00:50:12.041239 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-27 00:50:12.041248 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-27 00:50:12.041252 | orchestrator |
2026-03-27 00:50:12.041256 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-27 00:50:12.041261 | orchestrator | Friday 27 March 2026 00:48:20 +0000 (0:00:02.302) 0:00:17.721 **********
2026-03-27 00:50:12.041265 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-27 00:50:12.041270 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-27 00:50:12.041275 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-27 00:50:12.041279 | orchestrator |
2026-03-27 00:50:12.041284 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-27 00:50:12.041288 | orchestrator | Friday 27 March 2026 00:48:22 +0000 (0:00:01.377) 0:00:19.099 **********
2026-03-27 00:50:12.041296 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-27 00:50:12.041301 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-27 00:50:12.041305 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-27 00:50:12.041310 | orchestrator |
2026-03-27 00:50:12.041314 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-27 00:50:12.041318 | orchestrator | Friday 27 March 2026 00:48:23 +0000 (0:00:01.525) 0:00:20.624 **********
2026-03-27 00:50:12.041323 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-27 00:50:12.041327 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-27 00:50:12.041332 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-27 00:50:12.041337 | orchestrator |
2026-03-27 00:50:12.041344 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-27 00:50:12.041351 | orchestrator | Friday 27 March 2026 00:48:25 +0000 (0:00:01.645) 0:00:22.270 **********
2026-03-27 00:50:12.041357 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-27 00:50:12.041363 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-27 00:50:12.041369 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-27 00:50:12.041376 | orchestrator |
2026-03-27 00:50:12.041382 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-27 00:50:12.041388 | orchestrator | Friday 27 March 2026 00:48:28 +0000 (0:00:02.843) 0:00:25.113 **********
2026-03-27 00:50:12.041394 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:12.041400 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:12.041407 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:12.041412 | orchestrator |
2026-03-27 00:50:12.041419 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-03-27 00:50:12.041424 | orchestrator | Friday 27 March 2026 00:48:28 +0000 (0:00:00.569) 0:00:25.683 **********
2026-03-27 00:50:12.041431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:50:12.041446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:50:12.041961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:50:12.042004 | orchestrator |
2026-03-27 00:50:12.042011 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-03-27 00:50:12.042093 | orchestrator | Friday 27 March 2026 00:48:30 +0000 (0:00:01.672) 0:00:27.355 **********
2026-03-27 00:50:12.042100 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:50:12.042108 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:50:12.042115 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:50:12.042121 | orchestrator |
2026-03-27 00:50:12.042128 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-27 00:50:12.042134 | orchestrator | Friday 27 March 2026 00:48:31 +0000 (0:00:01.062) 0:00:28.418 **********
2026-03-27 00:50:12.042141 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:50:12.042147 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:50:12.042151 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:50:12.042155 | orchestrator |
2026-03-27 00:50:12.042159 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-27 00:50:12.042162 | orchestrator | Friday 27 March 2026 00:48:38 +0000 (0:00:06.629) 0:00:35.047 **********
2026-03-27 00:50:12.042166 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:50:12.042170 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:50:12.042174 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:50:12.042178 | orchestrator |
2026-03-27 00:50:12.042181 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-27 00:50:12.042185 | orchestrator |
2026-03-27 00:50:12.042198 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-27 00:50:12.042202 | orchestrator | Friday 27 March 2026 00:48:38 +0000 (0:00:00.320) 0:00:35.368 **********
2026-03-27 00:50:12.042206 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:50:12.042211 | orchestrator |
2026-03-27 00:50:12.042215 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-27 00:50:12.042218 | orchestrator | Friday 27 March 2026 00:48:39 +0000 (0:00:00.650) 0:00:36.018 **********
2026-03-27 00:50:12.042222 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:12.042226 | orchestrator |
2026-03-27 00:50:12.042229 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-27 00:50:12.042233 | orchestrator | Friday 27 March 2026 00:48:39 +0000 (0:00:00.192) 0:00:36.210 **********
2026-03-27 00:50:12.042237 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:50:12.042240 | orchestrator |
2026-03-27 00:50:12.042245 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-27 00:50:12.042248 | orchestrator | Friday 27 March 2026 00:48:46 +0000 (0:00:06.886) 0:00:43.097 **********
2026-03-27 00:50:12.042252 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:50:12.042256 | orchestrator |
2026-03-27 00:50:12.042260 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-27 00:50:12.042263 | orchestrator |
2026-03-27 00:50:12.042267 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-27 00:50:12.042275 | orchestrator | Friday 27 March 2026 00:49:36 +0000 (0:00:50.503) 0:01:33.601 **********
2026-03-27 00:50:12.042279 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:50:12.042283 | orchestrator |
2026-03-27 00:50:12.042286 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-27 00:50:12.042290 | orchestrator | Friday 27 March 2026 00:49:37 +0000 (0:00:00.699) 0:01:34.300 **********
2026-03-27 00:50:12.042294 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:12.042298 | orchestrator |
2026-03-27 00:50:12.042302 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-27 00:50:12.042308 | orchestrator | Friday 27 March 2026 00:49:37 +0000 (0:00:00.246) 0:01:34.547 **********
2026-03-27 00:50:12.042314 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:50:12.042319 | orchestrator |
2026-03-27 00:50:12.042324 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-27 00:50:12.042330 | orchestrator | Friday 27 March 2026 00:49:44 +0000 (0:00:06.604) 0:01:41.152 **********
2026-03-27 00:50:12.042337 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:50:12.042343 | orchestrator |
2026-03-27 00:50:12.042348 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-27 00:50:12.042354 | orchestrator |
2026-03-27 00:50:12.042360 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-27 00:50:12.042366 | orchestrator | Friday 27 March 2026 00:49:51 +0000 (0:00:07.548) 0:01:48.700 **********
2026-03-27 00:50:12.042371 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:50:12.042377 | orchestrator |
2026-03-27 00:50:12.042383 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-27 00:50:12.042389 | orchestrator | Friday 27 March 2026 00:49:52 +0000 (0:00:00.557) 0:01:49.258 **********
2026-03-27 00:50:12.042394 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:12.042400 | orchestrator |
2026-03-27 00:50:12.042406 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-27 00:50:12.042411 | orchestrator | Friday 27 March 2026 00:49:52 +0000 (0:00:00.276) 0:01:49.534 **********
2026-03-27 00:50:12.042418 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:50:12.042432 | orchestrator |
2026-03-27 00:50:12.042440 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-27 00:50:12.042459 | orchestrator | Friday 27 March 2026 00:49:53 +0000 (0:00:01.308) 0:01:50.843 **********
2026-03-27 00:50:12.042465 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:50:12.042469 | orchestrator |
2026-03-27 00:50:12.042473 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-27 00:50:12.042482 | orchestrator |
2026-03-27 00:50:12.042486 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-27 00:50:12.042490 | orchestrator | Friday 27 March 2026 00:50:05 +0000 (0:00:11.339) 0:02:02.185 **********
2026-03-27 00:50:12.042493 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:50:12.042497 | orchestrator |
2026-03-27 00:50:12.042533 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-27 00:50:12.042538 | orchestrator | Friday 27 March 2026 00:50:07 +0000 (0:00:01.921) 0:02:04.107 **********
2026-03-27 00:50:12.042542 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:50:12.042546 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:50:12.042552 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:50:12.042559 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-27 00:50:12.042568 | orchestrator | enable_outward_rabbitmq_True
2026-03-27 00:50:12.042577 | orchestrator |
2026-03-27 00:50:12.042583 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-27 00:50:12.042589 | orchestrator | skipping: no hosts matched
2026-03-27 00:50:12.042595 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-27 00:50:12.042602 | orchestrator | outward_rabbitmq_restart
2026-03-27 00:50:12.042608 | orchestrator |
2026-03-27 00:50:12.042614 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-27 00:50:12.042621 | orchestrator | skipping: no hosts matched
2026-03-27 00:50:12.042627 | orchestrator |
2026-03-27 00:50:12.042634 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-27 00:50:12.042640 | orchestrator | skipping: no hosts matched
2026-03-27 00:50:12.042647 | orchestrator |
2026-03-27 00:50:12.042655 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:50:12.042659 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-27 00:50:12.042665 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-27 00:50:12.042669 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:50:12.042673 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:50:12.042676 | orchestrator |
2026-03-27 00:50:12.042680 | orchestrator |
2026-03-27 00:50:12.042684 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:50:12.042687 | orchestrator | Friday 27 March 2026 00:50:09 +0000 (0:00:02.673) 0:02:06.781 **********
2026-03-27 00:50:12.042691 | orchestrator | ===============================================================================
2026-03-27 00:50:12.042695 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 69.40s
2026-03-27 00:50:12.042699 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 14.80s
2026-03-27 00:50:12.042702 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.63s
2026-03-27 00:50:12.042706 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.43s
2026-03-27 00:50:12.042709 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.84s
2026-03-27 00:50:12.042717 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.69s
2026-03-27 00:50:12.042721 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.68s
2026-03-27 00:50:12.042725 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.30s
2026-03-27 00:50:12.042729 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.92s
2026-03-27 00:50:12.042739 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.91s
2026-03-27 00:50:12.042742 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.70s
2026-03-27 00:50:12.042746 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.67s
2026-03-27 00:50:12.042750 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.65s
2026-03-27 00:50:12.042754 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.53s
2026-03-27 00:50:12.042757 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.38s
2026-03-27 00:50:12.042761 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.27s
2026-03-27 00:50:12.042765 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.26s
2026-03-27 00:50:12.042768 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.06s
2026-03-27 00:50:12.042772 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.71s
2026-03-27 00:50:12.042776 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.64s
2026-03-27 00:50:12.042780 | orchestrator | 2026-03-27 00:50:12 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:12.042784 | orchestrator | 2026-03-27 00:50:12 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:15.100269 | orchestrator | 2026-03-27 00:50:15 | INFO  | Task e4885535-b5d3-4c2f-b375-6ff7a9617bcb is in state STARTED
2026-03-27 00:50:15.102190 | orchestrator | 2026-03-27 00:50:15 | INFO  | Task a2da78f9-1aff-4b8c-86fc-3af9221a67d9 is in state SUCCESS
2026-03-27 00:50:15.103395 | orchestrator |
2026-03-27 00:50:15.103436 | orchestrator |
2026-03-27 00:50:15.103444 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-27 00:50:15.103451 | orchestrator |
2026-03-27 00:50:15.103458 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-27 00:50:15.103465 | orchestrator | Friday 27 March 2026 00:45:30 +0000 (0:00:00.329) 0:00:00.329 **********
2026-03-27 00:50:15.103471 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:50:15.103478 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:50:15.103484 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:50:15.103490 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:50:15.103496 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:50:15.103502 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:50:15.103508 | orchestrator |
2026-03-27 00:50:15.103514 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-27 00:50:15.103521 | orchestrator | Friday 27 March 2026 00:45:30 +0000 (0:00:00.686) 0:00:01.016 **********
2026-03-27 00:50:15.103527 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.103533 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.103539 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.103546 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.103552 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.103558 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.103564 | orchestrator |
2026-03-27 00:50:15.103570 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-27 00:50:15.103576 | orchestrator | Friday 27 March 2026 00:45:31 +0000 (0:00:00.804) 0:00:01.820 **********
2026-03-27 00:50:15.103583 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.103588 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.103594 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.103601 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.103607 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.103612 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.103619 | orchestrator |
2026-03-27 00:50:15.103625 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-27 00:50:15.103654 | orchestrator | Friday 27 March 2026 00:45:32 +0000 (0:00:00.525) 0:00:02.345 **********
2026-03-27 00:50:15.103676 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:50:15.103682 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:50:15.103688 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:50:15.103694 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:50:15.103701 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:50:15.103707 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:50:15.103713 | orchestrator |
2026-03-27 00:50:15.103719 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-27 00:50:15.103725 | orchestrator | Friday 27 March 2026 00:45:34 +0000 (0:00:02.596) 0:00:04.942 **********
2026-03-27 00:50:15.103731 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:50:15.103737 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:50:15.103743 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:50:15.103749 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:50:15.103755 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:50:15.103761 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:50:15.103767 | orchestrator |
2026-03-27 00:50:15.103774 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-27 00:50:15.103780 | orchestrator | Friday 27 March 2026 00:45:36 +0000 (0:00:01.940) 0:00:06.882 **********
2026-03-27 00:50:15.103786 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:50:15.103792 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:50:15.103798 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:50:15.103804 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:50:15.103810 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:50:15.103824 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:50:15.103830 | orchestrator |
2026-03-27 00:50:15.103836 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-27 00:50:15.103876 | orchestrator | Friday 27 March 2026 00:45:38 +0000 (0:00:01.974) 0:00:08.856 **********
2026-03-27 00:50:15.103883 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.103895 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.103901 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.103908 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.103914 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.103920 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.103930 | orchestrator |
2026-03-27 00:50:15.103937 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-27 00:50:15.103943 | orchestrator | Friday 27 March 2026 00:45:39 +0000 (0:00:01.033) 0:00:09.889 **********
2026-03-27 00:50:15.103949 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.103955 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.103961 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.103968 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.103973 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.103979 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.103986 | orchestrator |
2026-03-27 00:50:15.103992 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-27 00:50:15.103997 | orchestrator | Friday 27 March 2026 00:45:40 +0000 (0:00:00.688) 0:00:10.578 **********
2026-03-27 00:50:15.104004 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-27 00:50:15.104010 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-27 00:50:15.104016 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.104022 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-27 00:50:15.104028 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-27 00:50:15.104035 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.104041 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-27 00:50:15.104047 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-27 00:50:15.104053 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.104064 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-27 00:50:15.104081 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-27 00:50:15.104088 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.104094 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-27 00:50:15.104100 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-27 00:50:15.104107 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.104113 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-27 00:50:15.104119 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-27 00:50:15.104125 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.104131 | orchestrator |
2026-03-27 00:50:15.104137 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-27 00:50:15.104143 | orchestrator | Friday 27 March 2026 00:45:41 +0000 (0:00:01.362) 0:00:11.940 **********
2026-03-27 00:50:15.104149 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.104156 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.104162 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.104168 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.104174 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.104180 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.104186 | orchestrator |
2026-03-27 00:50:15.104192 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-27 00:50:15.104199 | orchestrator | Friday 27 March 2026 00:45:44 +0000 (0:00:02.121) 0:00:14.061 **********
2026-03-27 00:50:15.104205 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:50:15.104212 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:50:15.104217 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:50:15.104223 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:50:15.104229 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:50:15.104236 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:50:15.104242 | orchestrator |
2026-03-27 00:50:15.104248 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-27 00:50:15.104254 | orchestrator | Friday 27 March 2026 00:45:45 +0000 (0:00:01.265) 0:00:15.327 **********
2026-03-27 00:50:15.104260 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:50:15.104267 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:50:15.104273 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:50:15.104279 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:50:15.104285 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:50:15.104291 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:50:15.104297 | orchestrator |
2026-03-27 00:50:15.104303 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-27 00:50:15.104309 | orchestrator | Friday 27 March 2026 00:45:51 +0000 (0:00:06.270) 0:00:21.598 **********
2026-03-27 00:50:15.104315 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.104321 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.104328 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.104334 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.104340 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.104346 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.104352 | orchestrator |
2026-03-27 00:50:15.104360 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-27 00:50:15.104366 | orchestrator | Friday 27 March 2026 00:45:53 +0000 (0:00:01.610) 0:00:23.208 **********
2026-03-27 00:50:15.104372 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.104378 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.104384 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.104391 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.104397 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.104409 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.104414 | orchestrator |
2026-03-27 00:50:15.104423 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-27 00:50:15.104431 | orchestrator | Friday 27 March 2026 00:45:55 +0000 (0:00:01.298) 0:00:25.170 **********
2026-03-27 00:50:15.104438 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.104442 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.104445 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.104449 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.104453 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.104456 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.104460 | orchestrator |
2026-03-27 00:50:15.104464 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-27 00:50:15.104467 | orchestrator | Friday 27 March 2026 00:45:56 +0000 (0:00:01.298) 0:00:26.468 **********
2026-03-27 00:50:15.104471 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-27 00:50:15.104475 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-27 00:50:15.104479 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.104483 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-27 00:50:15.104487 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-27 00:50:15.104491 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.104494 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-27 00:50:15.104498 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-27 00:50:15.104502 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.104505 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-27 00:50:15.104509 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-27 00:50:15.104513 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.104517 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-27 00:50:15.104520 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-27 00:50:15.104524 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.104528 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-27 00:50:15.104531 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-27 00:50:15.104535 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.104539 | orchestrator |
2026-03-27 00:50:15.104543 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-27 00:50:15.104550 | orchestrator | Friday 27 March 2026 00:45:57 +0000 (0:00:00.885) 0:00:27.354 **********
2026-03-27 00:50:15.104554 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.104557 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.104561 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.104565 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.104568 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.104572 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.104576 | orchestrator |
2026-03-27 00:50:15.104580 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-27 00:50:15.104583 | orchestrator | Friday 27 March 2026 00:45:58 +0000 (0:00:00.956) 0:00:28.310 **********
2026-03-27 00:50:15.104587 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:50:15.104591 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:50:15.104594 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:50:15.104598 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:50:15.104602 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:50:15.104605 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:50:15.104609 | orchestrator |
2026-03-27 00:50:15.104613 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-27 00:50:15.104617 | orchestrator |
2026-03-27 00:50:15.104620 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-27 00:50:15.104627 | orchestrator | Friday 27 March 2026 00:45:59 +0000 (0:00:01.524) 0:00:29.834 **********
2026-03-27 00:50:15.104631 | orchestrator | ok: 
[testbed-node-2] 2026-03-27 00:50:15.104635 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.104638 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.104642 | orchestrator | 2026-03-27 00:50:15.104646 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-27 00:50:15.104649 | orchestrator | Friday 27 March 2026 00:46:01 +0000 (0:00:01.326) 0:00:31.161 ********** 2026-03-27 00:50:15.104653 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.104657 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.104660 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.104664 | orchestrator | 2026-03-27 00:50:15.104668 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-27 00:50:15.104672 | orchestrator | Friday 27 March 2026 00:46:02 +0000 (0:00:01.597) 0:00:32.758 ********** 2026-03-27 00:50:15.104675 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.104679 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.104683 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.104686 | orchestrator | 2026-03-27 00:50:15.104690 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-27 00:50:15.104694 | orchestrator | Friday 27 March 2026 00:46:04 +0000 (0:00:01.302) 0:00:34.060 ********** 2026-03-27 00:50:15.104697 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.104701 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.104705 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.104708 | orchestrator | 2026-03-27 00:50:15.104712 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-27 00:50:15.104716 | orchestrator | Friday 27 March 2026 00:46:05 +0000 (0:00:01.907) 0:00:35.968 ********** 2026-03-27 00:50:15.104719 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:15.104723 | orchestrator | 
skipping: [testbed-node-1] 2026-03-27 00:50:15.104727 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:50:15.104731 | orchestrator | 2026-03-27 00:50:15.104734 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-27 00:50:15.104738 | orchestrator | Friday 27 March 2026 00:46:06 +0000 (0:00:00.478) 0:00:36.446 ********** 2026-03-27 00:50:15.104742 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.104745 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.104749 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.104753 | orchestrator | 2026-03-27 00:50:15.104756 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-27 00:50:15.104762 | orchestrator | Friday 27 March 2026 00:46:07 +0000 (0:00:01.240) 0:00:37.687 ********** 2026-03-27 00:50:15.104766 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.104770 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.104773 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.104777 | orchestrator | 2026-03-27 00:50:15.104781 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-27 00:50:15.104784 | orchestrator | Friday 27 March 2026 00:46:09 +0000 (0:00:01.703) 0:00:39.391 ********** 2026-03-27 00:50:15.104788 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:50:15.104792 | orchestrator | 2026-03-27 00:50:15.104796 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-27 00:50:15.104799 | orchestrator | Friday 27 March 2026 00:46:10 +0000 (0:00:00.864) 0:00:40.256 ********** 2026-03-27 00:50:15.104803 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.104807 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.104810 | orchestrator | ok: 
[testbed-node-2] 2026-03-27 00:50:15.104814 | orchestrator | 2026-03-27 00:50:15.104818 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-27 00:50:15.104822 | orchestrator | Friday 27 March 2026 00:46:12 +0000 (0:00:02.545) 0:00:42.801 ********** 2026-03-27 00:50:15.104825 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:50:15.104832 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:50:15.104836 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.104839 | orchestrator | 2026-03-27 00:50:15.104843 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-27 00:50:15.104847 | orchestrator | Friday 27 March 2026 00:46:13 +0000 (0:00:00.996) 0:00:43.798 ********** 2026-03-27 00:50:15.104879 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:50:15.104883 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.104887 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:50:15.104891 | orchestrator | 2026-03-27 00:50:15.104894 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-27 00:50:15.104898 | orchestrator | Friday 27 March 2026 00:46:15 +0000 (0:00:01.559) 0:00:45.357 ********** 2026-03-27 00:50:15.104902 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:50:15.104906 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:50:15.104909 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.104913 | orchestrator | 2026-03-27 00:50:15.104917 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-27 00:50:15.104923 | orchestrator | Friday 27 March 2026 00:46:16 +0000 (0:00:01.292) 0:00:46.650 ********** 2026-03-27 00:50:15.104927 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:50:15.104931 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:15.104935 | orchestrator | skipping: 
[testbed-node-2] 2026-03-27 00:50:15.104939 | orchestrator | 2026-03-27 00:50:15.104942 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-27 00:50:15.104946 | orchestrator | Friday 27 March 2026 00:46:17 +0000 (0:00:00.528) 0:00:47.178 ********** 2026-03-27 00:50:15.104950 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:15.104954 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:50:15.104957 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:50:15.104961 | orchestrator | 2026-03-27 00:50:15.104965 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-27 00:50:15.104969 | orchestrator | Friday 27 March 2026 00:46:17 +0000 (0:00:00.565) 0:00:47.744 ********** 2026-03-27 00:50:15.104972 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.104976 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.104980 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.104984 | orchestrator | 2026-03-27 00:50:15.104987 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-27 00:50:15.104991 | orchestrator | Friday 27 March 2026 00:46:19 +0000 (0:00:01.499) 0:00:49.243 ********** 2026-03-27 00:50:15.104995 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.104999 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.105003 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.105006 | orchestrator | 2026-03-27 00:50:15.105010 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-27 00:50:15.105014 | orchestrator | Friday 27 March 2026 00:46:21 +0000 (0:00:02.619) 0:00:51.863 ********** 2026-03-27 00:50:15.105018 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.105021 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.105025 | orchestrator | ok: [testbed-node-2] 2026-03-27 
00:50:15.105029 | orchestrator | 2026-03-27 00:50:15.105033 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-27 00:50:15.105037 | orchestrator | Friday 27 March 2026 00:46:22 +0000 (0:00:00.781) 0:00:52.644 ********** 2026-03-27 00:50:15.105040 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-27 00:50:15.105045 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-27 00:50:15.105048 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-27 00:50:15.105052 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-27 00:50:15.105060 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-27 00:50:15.105063 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-27 00:50:15.105067 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-27 00:50:15.105073 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-27 00:50:15.105077 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-03-27 00:50:15.105081 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-27 00:50:15.105085 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-27 00:50:15.105089 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-27 00:50:15.105092 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-03-27 00:50:15.105096 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-03-27 00:50:15.105100 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-03-27 00:50:15.105104 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.105108 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.105111 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.105115 | orchestrator | 2026-03-27 00:50:15.105119 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-27 00:50:15.105123 | orchestrator | Friday 27 March 2026 00:47:16 +0000 (0:00:53.976) 0:01:46.620 ********** 2026-03-27 00:50:15.105127 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:15.105130 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:50:15.105134 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:50:15.105138 | orchestrator | 2026-03-27 00:50:15.105142 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-27 00:50:15.105147 | orchestrator | Friday 27 March 2026 00:47:17 +0000 (0:00:00.466) 0:01:47.087 ********** 2026-03-27 00:50:15.105151 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.105155 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.105159 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.105163 | orchestrator | 2026-03-27 00:50:15.105166 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-27 00:50:15.105170 | orchestrator | Friday 27 March 2026 00:47:17 +0000 (0:00:00.946) 0:01:48.034 ********** 2026-03-27 00:50:15.105174 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.105178 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.105182 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.105185 | orchestrator | 2026-03-27 00:50:15.105189 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-27 00:50:15.105193 | orchestrator | Friday 27 March 2026 00:47:19 +0000 (0:00:01.225) 0:01:49.259 ********** 2026-03-27 00:50:15.105196 
| orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.105200 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.105204 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.105208 | orchestrator | 2026-03-27 00:50:15.105211 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-27 00:50:15.105218 | orchestrator | Friday 27 March 2026 00:47:45 +0000 (0:00:25.920) 0:02:15.180 ********** 2026-03-27 00:50:15.105222 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.105225 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.105229 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.105233 | orchestrator | 2026-03-27 00:50:15.105237 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-27 00:50:15.105240 | orchestrator | Friday 27 March 2026 00:47:45 +0000 (0:00:00.607) 0:02:15.787 ********** 2026-03-27 00:50:15.105244 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.105248 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.105255 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.105261 | orchestrator | 2026-03-27 00:50:15.105267 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-27 00:50:15.105273 | orchestrator | Friday 27 March 2026 00:47:46 +0000 (0:00:00.855) 0:02:16.643 ********** 2026-03-27 00:50:15.105279 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.105285 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.105291 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.105297 | orchestrator | 2026-03-27 00:50:15.105303 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-27 00:50:15.105309 | orchestrator | Friday 27 March 2026 00:47:47 +0000 (0:00:00.703) 0:02:17.346 ********** 2026-03-27 00:50:15.105315 | orchestrator | ok: [testbed-node-0] 
2026-03-27 00:50:15.105322 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.105328 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.105335 | orchestrator | 2026-03-27 00:50:15.105341 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-27 00:50:15.105347 | orchestrator | Friday 27 March 2026 00:47:47 +0000 (0:00:00.672) 0:02:18.018 ********** 2026-03-27 00:50:15.105353 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.105359 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.105365 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.105371 | orchestrator | 2026-03-27 00:50:15.105378 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-27 00:50:15.105383 | orchestrator | Friday 27 March 2026 00:47:48 +0000 (0:00:00.383) 0:02:18.402 ********** 2026-03-27 00:50:15.105389 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.105396 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.105402 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.105408 | orchestrator | 2026-03-27 00:50:15.105414 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-27 00:50:15.105420 | orchestrator | Friday 27 March 2026 00:47:49 +0000 (0:00:00.887) 0:02:19.289 ********** 2026-03-27 00:50:15.105426 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.105435 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.105441 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.105447 | orchestrator | 2026-03-27 00:50:15.105453 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-27 00:50:15.105460 | orchestrator | Friday 27 March 2026 00:47:49 +0000 (0:00:00.602) 0:02:19.891 ********** 2026-03-27 00:50:15.105466 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.105471 | 
orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.105478 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.105484 | orchestrator | 2026-03-27 00:50:15.105490 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-27 00:50:15.105496 | orchestrator | Friday 27 March 2026 00:47:50 +0000 (0:00:00.784) 0:02:20.676 ********** 2026-03-27 00:50:15.105502 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:50:15.105508 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:50:15.105514 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:50:15.105520 | orchestrator | 2026-03-27 00:50:15.105526 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-27 00:50:15.105532 | orchestrator | Friday 27 March 2026 00:47:51 +0000 (0:00:00.819) 0:02:21.495 ********** 2026-03-27 00:50:15.105541 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:15.105548 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:50:15.105554 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:50:15.105559 | orchestrator | 2026-03-27 00:50:15.105566 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-27 00:50:15.105572 | orchestrator | Friday 27 March 2026 00:47:51 +0000 (0:00:00.503) 0:02:21.999 ********** 2026-03-27 00:50:15.105578 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:50:15.105584 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:50:15.105590 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:50:15.105595 | orchestrator | 2026-03-27 00:50:15.105601 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-27 00:50:15.105608 | orchestrator | Friday 27 March 2026 00:47:52 +0000 (0:00:00.328) 0:02:22.328 ********** 2026-03-27 00:50:15.105614 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.105620 | orchestrator | 
ok: [testbed-node-1] 2026-03-27 00:50:15.105626 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.105632 | orchestrator | 2026-03-27 00:50:15.105638 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-27 00:50:15.105644 | orchestrator | Friday 27 March 2026 00:47:53 +0000 (0:00:00.770) 0:02:23.098 ********** 2026-03-27 00:50:15.105650 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:50:15.105660 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:50:15.105666 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:50:15.105672 | orchestrator | 2026-03-27 00:50:15.105678 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-27 00:50:15.105684 | orchestrator | Friday 27 March 2026 00:47:53 +0000 (0:00:00.661) 0:02:23.760 ********** 2026-03-27 00:50:15.105690 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-27 00:50:15.105697 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-27 00:50:15.105703 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-27 00:50:15.105709 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-27 00:50:15.105715 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-27 00:50:15.105721 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-27 00:50:15.105727 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-27 00:50:15.105733 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-27 
00:50:15.105739 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-27 00:50:15.105746 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-27 00:50:15.105752 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-27 00:50:15.105757 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-27 00:50:15.105763 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-27 00:50:15.105769 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-27 00:50:15.105775 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-27 00:50:15.105781 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-27 00:50:15.105788 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-27 00:50:15.105794 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-27 00:50:15.105803 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-27 00:50:15.105810 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-27 00:50:15.105816 | orchestrator | 2026-03-27 00:50:15.105822 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-27 00:50:15.105828 | orchestrator | 2026-03-27 00:50:15.105834 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-27 00:50:15.105846 | orchestrator | Friday 27 March 2026 00:47:56 +0000 (0:00:02.803) 
0:02:26.563 ********** 2026-03-27 00:50:15.105900 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:50:15.105907 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:50:15.105913 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:50:15.105919 | orchestrator | 2026-03-27 00:50:15.105925 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-27 00:50:15.105931 | orchestrator | Friday 27 March 2026 00:47:56 +0000 (0:00:00.295) 0:02:26.858 ********** 2026-03-27 00:50:15.105937 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:50:15.105943 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:50:15.105949 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:50:15.105955 | orchestrator | 2026-03-27 00:50:15.105961 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-27 00:50:15.105968 | orchestrator | Friday 27 March 2026 00:47:58 +0000 (0:00:01.534) 0:02:28.393 ********** 2026-03-27 00:50:15.105973 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:50:15.105980 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:50:15.105986 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:50:15.105992 | orchestrator | 2026-03-27 00:50:15.106050 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-27 00:50:15.106057 | orchestrator | Friday 27 March 2026 00:47:58 +0000 (0:00:00.467) 0:02:28.860 ********** 2026-03-27 00:50:15.106064 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:50:15.106071 | orchestrator | 2026-03-27 00:50:15.106078 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-27 00:50:15.106085 | orchestrator | Friday 27 March 2026 00:47:59 +0000 (0:00:00.498) 0:02:29.359 ********** 2026-03-27 00:50:15.106091 | orchestrator | skipping: [testbed-node-3] 2026-03-27 
00:50:15.106098 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:50:15.106105 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:50:15.106112 | orchestrator | 2026-03-27 00:50:15.106118 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-27 00:50:15.106125 | orchestrator | Friday 27 March 2026 00:47:59 +0000 (0:00:00.299) 0:02:29.658 ********** 2026-03-27 00:50:15.106132 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:50:15.106138 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:50:15.106145 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:50:15.106152 | orchestrator | 2026-03-27 00:50:15.106159 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-27 00:50:15.106170 | orchestrator | Friday 27 March 2026 00:48:00 +0000 (0:00:00.438) 0:02:30.097 ********** 2026-03-27 00:50:15.106177 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:50:15.106183 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:50:15.106190 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:50:15.106196 | orchestrator | 2026-03-27 00:50:15.106203 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-27 00:50:15.106210 | orchestrator | Friday 27 March 2026 00:48:00 +0000 (0:00:00.355) 0:02:30.452 ********** 2026-03-27 00:50:15.106217 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:50:15.106223 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:50:15.106230 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:50:15.106237 | orchestrator | 2026-03-27 00:50:15.106244 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-27 00:50:15.106255 | orchestrator | Friday 27 March 2026 00:48:01 +0000 (0:00:00.776) 0:02:31.228 ********** 2026-03-27 00:50:15.106262 | orchestrator | changed: [testbed-node-3] 2026-03-27 
00:50:15.106270 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:50:15.106277 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:50:15.106283 | orchestrator | 2026-03-27 00:50:15.106290 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-27 00:50:15.106297 | orchestrator | Friday 27 March 2026 00:48:02 +0000 (0:00:01.186) 0:02:32.414 ********** 2026-03-27 00:50:15.106304 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:50:15.106310 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:50:15.106317 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:50:15.106324 | orchestrator | 2026-03-27 00:50:15.106331 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-27 00:50:15.106338 | orchestrator | Friday 27 March 2026 00:48:03 +0000 (0:00:01.592) 0:02:34.007 ********** 2026-03-27 00:50:15.106345 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:50:15.106352 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:50:15.106357 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:50:15.106361 | orchestrator | 2026-03-27 00:50:15.106365 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-27 00:50:15.106369 | orchestrator | 2026-03-27 00:50:15.106372 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-27 00:50:15.106376 | orchestrator | Friday 27 March 2026 00:48:14 +0000 (0:00:10.887) 0:02:44.894 ********** 2026-03-27 00:50:15.106380 | orchestrator | ok: [testbed-manager] 2026-03-27 00:50:15.106384 | orchestrator | 2026-03-27 00:50:15.106387 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-27 00:50:15.106391 | orchestrator | Friday 27 March 2026 00:48:15 +0000 (0:00:00.935) 0:02:45.830 ********** 2026-03-27 00:50:15.106395 | orchestrator | changed: [testbed-manager] 
TASK [Get kubeconfig file] *****************************************************
Friday 27 March 2026 00:48:16 +0000 (0:00:00.610) 0:02:46.441 **********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Friday 27 March 2026 00:48:16 +0000 (0:00:00.537) 0:02:46.979 **********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Friday 27 March 2026 00:48:18 +0000 (0:00:01.200) 0:02:48.180 **********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Friday 27 March 2026 00:48:18 +0000 (0:00:00.716) 0:02:48.896 **********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Friday 27 March 2026 00:48:20 +0000 (0:00:02.010) 0:02:50.906 **********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Friday 27 March 2026 00:48:21 +0000 (0:00:00.749) 0:02:51.655 **********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Friday 27 March 2026 00:48:21 +0000 (0:00:00.374) 0:02:52.030 **********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Friday 27 March 2026 00:48:22 +0000 (0:00:00.352) 0:02:52.382 **********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Friday 27 March 2026 00:48:22 +0000 (0:00:00.142) 0:02:52.524 **********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Friday 27 March 2026 00:48:22 +0000 (0:00:00.208) 0:02:52.733 **********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Friday 27 March 2026 00:48:23 +0000 (0:00:01.235) 0:02:53.968 **********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Friday 27 March 2026 00:48:25 +0000 (0:00:01.499) 0:02:55.468 **********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Friday 27 March 2026 00:48:26 +0000 (0:00:00.911) 0:02:56.380 **********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Friday 27 March 2026 00:48:26 +0000 (0:00:00.648) 0:02:57.028 **********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Friday 27 March 2026 00:48:34 +0000 (0:00:07.547) 0:03:04.576 **********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Friday 27 March 2026 00:48:47 +0000 (0:00:12.720) 0:03:17.297 **********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Friday 27 March 2026 00:48:47 +0000 (0:00:00.499) 0:03:17.796 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Friday 27 March 2026 00:48:48 +0000 (0:00:00.619) 0:03:18.415 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Friday 27 March 2026 00:48:48 +0000 (0:00:00.429) 0:03:18.845 **********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Friday 27 March 2026 00:48:49 +0000 (0:00:00.484) 0:03:19.329 **********
changed: [testbed-node-0 -> localhost]
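The kubeconfig plays above copy the k3s kubeconfig off the first control-plane node and then rewrite its server address, since k3s writes the file pointing at the local API endpoint. A minimal sketch of that rewrite step, using a throwaway file and an illustrative target address (192.168.16.10 appears in the log only as the delegate host; the real substituted address is not shown in the job output):

```shell
# Hedged sketch of the "Change server address in the kubeconfig" step.
# File content and target address are illustrative, not taken from the job.
workdir=$(mktemp -d)
cat > "$workdir/kubeconfig" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF
# Point the kubeconfig at an address reachable from other hosts.
sed -i 's|https://127.0.0.1:6443|https://192.168.16.10:6443|' "$workdir/kubeconfig"
grep 'server:' "$workdir/kubeconfig"
```

The later tasks in the play ("Set KUBECONFIG environment variable", "Enable kubectl command line completion") correspond roughly to exporting `KUBECONFIG` in the operator's shell profile and sourcing `kubectl completion bash`.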
TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Friday 27 March 2026 00:48:50 +0000 (0:00:00.800) 0:03:20.130 **********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Friday 27 March 2026 00:48:50 +0000 (0:00:00.840) 0:03:20.970 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Friday 27 March 2026 00:48:51 +0000 (0:00:01.165) 0:03:21.187 **********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Friday 27 March 2026 00:48:52 +0000 (0:00:01.165) 0:03:22.352 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Friday 27 March 2026 00:48:52 +0000 (0:00:00.144) 0:03:22.497 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Friday 27 March 2026 00:48:52 +0000 (0:00:00.114) 0:03:22.611 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Friday 27 March 2026 00:48:52 +0000 (0:00:00.103) 0:03:22.714 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Friday 27 March 2026 00:48:52 +0000 (0:00:00.110) 0:03:22.825 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Friday 27 March 2026 00:48:57 +0000 (0:00:04.769) 0:03:27.595 **********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
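The "FAILED - RETRYING ... (30 retries left)" line is Ansible's retry mechanism at work: the task re-runs its check until the Cilium workloads report ready or the retries are exhausted. The same pattern in plain bash, with a stand-in condition (a file appearing) replacing the real readiness check, which in the role is presumably something like `kubectl rollout status`:

```shell
# Hedged sketch of the retry-until-ready pattern behind the log line above.
# retry N CMD...: run CMD up to N times, sleeping one second between attempts.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

tmpdir=$(mktemp -d)
# The "resource" becomes ready after two seconds, standing in for a
# Cilium deployment finishing its rollout.
(sleep 2; touch "$tmpdir/ready") &
retry 30 test -f "$tmpdir/ready" && echo "resource ready"
wait
```

In the playbook this corresponds to a task with `until`, `retries: 30`, and a `delay`, which is why the wait accounts for 48 seconds in the TASKS RECAP further down.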
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Friday 27 March 2026 00:49:45 +0000 (0:00:48.379) 0:04:15.974 **********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Friday 27 March 2026 00:49:47 +0000 (0:00:01.361) 0:04:17.336 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Friday 27 March 2026 00:49:49 +0000 (0:00:01.869) 0:04:19.205 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Friday 27 March 2026 00:49:50 +0000 (0:00:01.207) 0:04:20.413 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Friday 27 March 2026 00:49:50 +0000 (0:00:00.097) 0:04:20.510 **********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Friday 27 March 2026 00:49:52 +0000 (0:00:01.958) 0:04:22.468 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Friday 27 March 2026 00:49:52 +0000 (0:00:00.410) 0:04:22.879 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Friday 27 March 2026 00:49:53 +0000 (0:00:00.799) 0:04:23.678 **********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Friday 27 March 2026 00:49:53 +0000 (0:00:00.138) 0:04:23.817 **********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Friday 27 March 2026 00:49:54 +0000 (0:00:00.347) 0:04:24.164 **********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Friday 27 March 2026 00:49:59 +0000 (0:00:04.930) 0:04:29.094 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Friday 27 March 2026 00:49:59 +0000 (0:00:00.637) 0:04:29.731 **********
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Friday 27 March 2026 00:50:11 +0000 (0:00:11.907) 0:04:41.639 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Friday 27 March 2026 00:50:12 +0000 (0:00:00.498) 0:04:42.138 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=21  changed=11  unreachable=0  failed=0  skipped=0   rescued=0  ignored=0
testbed-node-0  : ok=50  changed=23  unreachable=0  failed=0  skipped=28  rescued=0  ignored=0
testbed-node-1  : ok=38  changed=16  unreachable=0  failed=0  skipped=25  rescued=0  ignored=0
testbed-node-2  : ok=38  changed=16  unreachable=0  failed=0  skipped=25  rescued=0  ignored=0
testbed-node-3  : ok=16  changed=8   unreachable=0  failed=0  skipped=17  rescued=0  ignored=0
testbed-node-4  : ok=16  changed=8   unreachable=0  failed=0  skipped=17  rescued=0  ignored=0
testbed-node-5  : ok=16  changed=8   unreachable=0  failed=0  skipped=17  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Friday 27 March 2026 00:50:12 +0000 (0:00:00.529) 0:04:42.667 **********
===============================================================================
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.98s
k3s_server_post : Wait for Cilium resources ---------------------------- 48.38s
k3s_server : Enable and check K3s service ------------------------------ 25.92s
kubectl : Install required packages ------------------------------------ 12.72s
Manage labels ---------------------------------------------------------- 11.91s
k3s_agent : Manage k3s service ----------------------------------------- 10.89s
kubectl : Add repository Debian ----------------------------------------- 7.55s
k3s_download : Download k3s binary x64 ---------------------------------- 6.27s
k9s : Install k9s packages ---------------------------------------------- 4.93s
k3s_server_post : Install Cilium ---------------------------------------- 4.77s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.80s
k3s_server : Detect Kubernetes version for label compatibility ---------- 2.62s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.60s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.55s
k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.12s
Make kubeconfig available for use inside the manager service ------------ 2.01s
k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.97s
k3s_download : Download k3s binary armhf -------------------------------- 1.96s
k3s_server_post : Test for BGP config resources ------------------------- 1.96s
k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.94s

2026-03-27 00:50:15 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:15 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:15 | INFO  | Task 23282705-0742-463a-acb8-350d27053ca4 is in state STARTED
2026-03-27 00:50:15 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:15 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:18 | INFO  | Task e4885535-b5d3-4c2f-b375-6ff7a9617bcb is in state STARTED
2026-03-27 00:50:18 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:18 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:18 | INFO  | Task 23282705-0742-463a-acb8-350d27053ca4 is in state STARTED
2026-03-27 00:50:18 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:18 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:21 | INFO  | Task e4885535-b5d3-4c2f-b375-6ff7a9617bcb is in state STARTED
2026-03-27 00:50:21 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:21 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:21 | INFO  | Task 23282705-0742-463a-acb8-350d27053ca4 is in state SUCCESS
2026-03-27 00:50:21 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:21 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:24 | INFO  | Task e4885535-b5d3-4c2f-b375-6ff7a9617bcb is in state SUCCESS
2026-03-27 00:50:24 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:24 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:24 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:24 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:27 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:27 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:27 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:27 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:30 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:30 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:30 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:30 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:33 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:33 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:33 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:33 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:36 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:36 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:36 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:36 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:39 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:39 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:39 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:39 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:42 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:42 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:42 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:42 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:45 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:45 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:45 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:45 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:48 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:48 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:48 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:48 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:51 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:51 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:51 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:51 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:54 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:54 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:54 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:54 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:50:57 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:50:57 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:50:57 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:50:57 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:51:00 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:51:00 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:51:00 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:51:00 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:51:03 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state STARTED
2026-03-27 00:51:03 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:51:03 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:51:03 | INFO  | Wait 1 second(s) until the next check

PLAY [Copy kubeconfig to the configuration repository] *************************

TASK [Get kubeconfig file] *****************************************************
Friday 27 March 2026 00:50:15 +0000 (0:00:00.230) 0:00:00.230 **********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Friday 27 March 2026 00:50:16 +0000 (0:00:00.989) 0:00:01.219 **********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig file] ****************************
Friday 27 March 2026 00:50:18 +0000 (0:00:01.289)
0:00:02.509 ********** 2026-03-27 00:51:06.955851 | orchestrator | changed: [testbed-manager] 2026-03-27 00:51:06.955860 | orchestrator | 2026-03-27 00:51:06.955870 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:51:06.955880 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:51:06.955940 | orchestrator | 2026-03-27 00:51:06.955952 | orchestrator | 2026-03-27 00:51:06.955963 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:51:06.955974 | orchestrator | Friday 27 March 2026 00:50:18 +0000 (0:00:00.402) 0:00:02.911 ********** 2026-03-27 00:51:06.955985 | orchestrator | =============================================================================== 2026-03-27 00:51:06.955996 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.29s 2026-03-27 00:51:06.956008 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.99s 2026-03-27 00:51:06.956019 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s 2026-03-27 00:51:06.956029 | orchestrator | 2026-03-27 00:51:06.956040 | orchestrator | 2026-03-27 00:51:06.956052 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-27 00:51:06.956062 | orchestrator | 2026-03-27 00:51:06.956075 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-27 00:51:06.956087 | orchestrator | Friday 27 March 2026 00:50:15 +0000 (0:00:00.217) 0:00:00.217 ********** 2026-03-27 00:51:06.956097 | orchestrator | ok: [testbed-manager] 2026-03-27 00:51:06.956110 | orchestrator | 2026-03-27 00:51:06.956121 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-27 00:51:06.956131 | orchestrator | 
Friday 27 March 2026 00:50:16 +0000 (0:00:00.819) 0:00:01.036 ********** 2026-03-27 00:51:06.956141 | orchestrator | ok: [testbed-manager] 2026-03-27 00:51:06.956152 | orchestrator | 2026-03-27 00:51:06.956219 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-27 00:51:06.956228 | orchestrator | Friday 27 March 2026 00:50:17 +0000 (0:00:00.515) 0:00:01.552 ********** 2026-03-27 00:51:06.956235 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-27 00:51:06.956243 | orchestrator | 2026-03-27 00:51:06.956249 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-27 00:51:06.956255 | orchestrator | Friday 27 March 2026 00:50:18 +0000 (0:00:00.918) 0:00:02.470 ********** 2026-03-27 00:51:06.956260 | orchestrator | changed: [testbed-manager] 2026-03-27 00:51:06.956266 | orchestrator | 2026-03-27 00:51:06.956272 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-27 00:51:06.956278 | orchestrator | Friday 27 March 2026 00:50:19 +0000 (0:00:01.053) 0:00:03.524 ********** 2026-03-27 00:51:06.956284 | orchestrator | changed: [testbed-manager] 2026-03-27 00:51:06.956289 | orchestrator | 2026-03-27 00:51:06.956295 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-27 00:51:06.956320 | orchestrator | Friday 27 March 2026 00:50:19 +0000 (0:00:00.490) 0:00:04.015 ********** 2026-03-27 00:51:06.956326 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-27 00:51:06.956332 | orchestrator | 2026-03-27 00:51:06.956338 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-27 00:51:06.956344 | orchestrator | Friday 27 March 2026 00:50:21 +0000 (0:00:01.522) 0:00:05.537 ********** 2026-03-27 00:51:06.956349 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-27 
00:51:06.956355 | orchestrator | 2026-03-27 00:51:06.956361 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-27 00:51:06.956366 | orchestrator | Friday 27 March 2026 00:50:22 +0000 (0:00:00.810) 0:00:06.348 ********** 2026-03-27 00:51:06.956372 | orchestrator | ok: [testbed-manager] 2026-03-27 00:51:06.956378 | orchestrator | 2026-03-27 00:51:06.956383 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-27 00:51:06.956389 | orchestrator | Friday 27 March 2026 00:50:22 +0000 (0:00:00.405) 0:00:06.754 ********** 2026-03-27 00:51:06.956395 | orchestrator | ok: [testbed-manager] 2026-03-27 00:51:06.956400 | orchestrator | 2026-03-27 00:51:06.956406 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:51:06.956412 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:51:06.956418 | orchestrator | 2026-03-27 00:51:06.956424 | orchestrator | 2026-03-27 00:51:06.956430 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:51:06.956435 | orchestrator | Friday 27 March 2026 00:50:22 +0000 (0:00:00.317) 0:00:07.072 ********** 2026-03-27 00:51:06.956441 | orchestrator | =============================================================================== 2026-03-27 00:51:06.956447 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s 2026-03-27 00:51:06.956452 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.05s 2026-03-27 00:51:06.956489 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.92s 2026-03-27 00:51:06.956515 | orchestrator | Get home directory of operator user ------------------------------------- 0.82s 2026-03-27 00:51:06.956521 | orchestrator | Change 
server address in the kubeconfig inside the manager service ------ 0.81s 2026-03-27 00:51:06.956527 | orchestrator | Create .kube directory -------------------------------------------------- 0.52s 2026-03-27 00:51:06.956532 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.49s 2026-03-27 00:51:06.956538 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s 2026-03-27 00:51:06.956552 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.32s 2026-03-27 00:51:06.956557 | orchestrator | 2026-03-27 00:51:06.956563 | orchestrator | 2026-03-27 00:51:06.956569 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 00:51:06.956575 | orchestrator | 2026-03-27 00:51:06.956580 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 00:51:06.956586 | orchestrator | Friday 27 March 2026 00:48:52 +0000 (0:00:00.157) 0:00:00.157 ********** 2026-03-27 00:51:06.956592 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:51:06.956597 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:51:06.956603 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:51:06.956609 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:51:06.956614 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:51:06.956620 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:51:06.956626 | orchestrator | 2026-03-27 00:51:06.956631 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 00:51:06.956637 | orchestrator | Friday 27 March 2026 00:48:53 +0000 (0:00:00.730) 0:00:00.888 ********** 2026-03-27 00:51:06.956643 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-27 00:51:06.956649 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-27 00:51:06.956665 | orchestrator | ok: [testbed-node-5] => 
(item=enable_ovn_True) 2026-03-27 00:51:06.956671 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-27 00:51:06.956676 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-27 00:51:06.956682 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-27 00:51:06.956688 | orchestrator | 2026-03-27 00:51:06.956693 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-27 00:51:06.956699 | orchestrator | 2026-03-27 00:51:06.956708 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-27 00:51:06.956718 | orchestrator | Friday 27 March 2026 00:48:54 +0000 (0:00:01.176) 0:00:02.065 ********** 2026-03-27 00:51:06.956743 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:51:06.956755 | orchestrator | 2026-03-27 00:51:06.956786 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-27 00:51:06.956795 | orchestrator | Friday 27 March 2026 00:48:55 +0000 (0:00:00.906) 0:00:02.971 ********** 2026-03-27 00:51:06.956806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956879 | orchestrator | 2026-03-27 00:51:06.956888 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-27 00:51:06.956896 | orchestrator | Friday 27 March 2026 00:48:57 +0000 (0:00:01.978) 0:00:04.950 ********** 2026-03-27 00:51:06.956913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956932 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.956967 | orchestrator | 2026-03-27 00:51:06.956977 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-27 00:51:06.956986 | orchestrator | Friday 27 March 2026 00:49:00 +0000 (0:00:02.370) 0:00:07.320 ********** 2026-03-27 00:51:06.956995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957024 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957067 | orchestrator | 2026-03-27 00:51:06.957076 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-27 00:51:06.957084 | orchestrator | Friday 27 March 2026 00:49:02 +0000 (0:00:02.003) 0:00:09.323 ********** 2026-03-27 00:51:06.957092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957111 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957165 | orchestrator | 2026-03-27 00:51:06.957174 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-27 00:51:06.957184 | orchestrator | Friday 27 March 2026 00:49:03 +0000 (0:00:01.645) 0:00:10.969 ********** 2026-03-27 00:51:06.957193 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-27 00:51:06.957201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.957251 | orchestrator | 2026-03-27 00:51:06.957261 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-27 00:51:06.957271 | orchestrator | Friday 27 March 2026 00:49:05 +0000 (0:00:01.559) 0:00:12.528 ********** 2026-03-27 00:51:06.957281 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:51:06.957291 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:51:06.957300 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:51:06.957308 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:51:06.957324 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:51:06.957334 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:51:06.957343 | orchestrator | 2026-03-27 00:51:06.957352 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-27 00:51:06.957362 | orchestrator | Friday 27 March 2026 00:49:08 +0000 (0:00:02.703) 0:00:15.232 ********** 2026-03-27 00:51:06.957371 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-27 00:51:06.957381 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-27 00:51:06.957390 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-27 00:51:06.957412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-27 00:51:06.957423 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-27 00:51:06.957432 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-27 00:51:06.957441 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-27 00:51:06.957451 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-27 00:51:06.957465 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-27 00:51:06.957474 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-27 00:51:06.957483 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-27 00:51:06.957492 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-27 00:51:06.957501 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-27 00:51:06.957511 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-27 00:51:06.957517 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-27 00:51:06.957523 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-27 00:51:06.957528 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-27 00:51:06.957534 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-27 00:51:06.957540 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-27 00:51:06.957547 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-27 00:51:06.957553 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-27 00:51:06.957559 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-27 00:51:06.957565 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-27 00:51:06.957570 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-27 00:51:06.957576 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-27 00:51:06.957582 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-27 00:51:06.957587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-27 00:51:06.957599 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-27 00:51:06.957605 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-27 00:51:06.957611 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-27 00:51:06.957617 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-27 00:51:06.957623 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-27 00:51:06.957629 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-27 00:51:06.957638 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-27 00:51:06.957648 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-27 00:51:06.957657 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-27 00:51:06.957666 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-27 00:51:06.957675 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-27 00:51:06.957684 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-27 00:51:06.957693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-27 00:51:06.957701 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-27 00:51:06.957717 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-27 00:51:06.957727 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-03-27 00:51:06.957739 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-03-27 00:51:06.957754 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-03-27 00:51:06.957790 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-03-27 00:51:06.957796 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-03-27 00:51:06.957802 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-03-27 00:51:06.957808 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-27 00:51:06.957814 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-27 00:51:06.957820 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-27 00:51:06.957826 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-27 00:51:06.957831 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-27 00:51:06.957837 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-27 00:51:06.957843 | orchestrator |
2026-03-27 00:51:06.957849 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-27 00:51:06.957866 | orchestrator | Friday 27 March 2026 00:49:25 +0000 (0:00:17.594) 0:00:32.826 **********
2026-03-27 00:51:06.957872 | orchestrator |
2026-03-27 00:51:06.957878 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-27 00:51:06.957883 | orchestrator | Friday 27 March 2026 00:49:25 +0000 (0:00:00.066) 0:00:32.892 **********
2026-03-27 00:51:06.957889 | orchestrator |
2026-03-27 00:51:06.957895 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
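Every compute and control node in the loop above receives the same `ovn-remote` value, a comma-separated list of `tcp:<ip>:6642` endpoints pointing at the three southbound DB hosts. As a hedged sketch of how that string is assembled (the helper name `build_ovn_remote` and its argument order are illustrative, not taken from the ovn-controller role):

```shell
#!/bin/sh
# build_ovn_remote PORT IP... : join each IP into a "tcp:IP:PORT" entry,
# comma-separated, matching the ovn-remote external_ids value in the log
# above. Helper name and interface are illustrative assumptions.
build_ovn_remote() {
  port="$1"; shift
  out=""
  for ip in "$@"; do
    # ${out:+$out,} prepends a comma only when out is already non-empty
    out="${out:+$out,}tcp:${ip}:${port}"
  done
  printf '%s\n' "$out"
}

build_ovn_remote 6642 192.168.16.10 192.168.16.11 192.168.16.12
# → tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

Port 6642 is the conventional OVN southbound DB port; the northbound DB listens on 6641, which matches the NB/SB split seen later in the ovn-db play.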
2026-03-27 00:51:06.957900 | orchestrator | Friday 27 March 2026 00:49:25 +0000 (0:00:00.063) 0:00:32.956 **********
2026-03-27 00:51:06.957906 | orchestrator |
2026-03-27 00:51:06.957912 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-27 00:51:06.957918 | orchestrator | Friday 27 March 2026 00:49:25 +0000 (0:00:00.062) 0:00:33.019 **********
2026-03-27 00:51:06.957923 | orchestrator |
2026-03-27 00:51:06.957929 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-27 00:51:06.957935 | orchestrator | Friday 27 March 2026 00:49:25 +0000 (0:00:00.064) 0:00:33.084 **********
2026-03-27 00:51:06.957941 | orchestrator |
2026-03-27 00:51:06.957946 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-27 00:51:06.957952 | orchestrator | Friday 27 March 2026 00:49:25 +0000 (0:00:00.063) 0:00:33.148 **********
2026-03-27 00:51:06.957958 | orchestrator |
2026-03-27 00:51:06.957963 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-03-27 00:51:06.957969 | orchestrator | Friday 27 March 2026 00:49:26 +0000 (0:00:00.079) 0:00:33.227 **********
2026-03-27 00:51:06.957975 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:51:06.957981 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:51:06.957987 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:51:06.957993 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:51:06.957999 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:51:06.958004 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:51:06.958010 | orchestrator |
2026-03-27 00:51:06.958082 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-03-27 00:51:06.958092 | orchestrator | Friday 27 March 2026 00:49:28 +0000 (0:00:02.021) 0:00:35.249 **********
2026-03-27 00:51:06.958102 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:51:06.958111 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:51:06.958120 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:51:06.958129 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:51:06.958138 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:51:06.958147 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:51:06.958157 | orchestrator |
2026-03-27 00:51:06.958166 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-03-27 00:51:06.958175 | orchestrator |
2026-03-27 00:51:06.958184 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-27 00:51:06.958193 | orchestrator | Friday 27 March 2026 00:49:54 +0000 (0:00:26.642) 0:01:01.892 **********
2026-03-27 00:51:06.958203 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:51:06.958212 | orchestrator |
2026-03-27 00:51:06.958222 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-27 00:51:06.958232 | orchestrator | Friday 27 March 2026 00:49:55 +0000 (0:00:00.458) 0:01:02.350 **********
2026-03-27 00:51:06.958242 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:51:06.958251 | orchestrator |
2026-03-27 00:51:06.958270 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-03-27 00:51:06.958276 | orchestrator | Friday 27 March 2026 00:49:56 +0000 (0:00:00.891) 0:01:03.241 **********
2026-03-27 00:51:06.958282 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:51:06.958288 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:51:06.958294 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:51:06.958300 | orchestrator |
2026-03-27 00:51:06.958306 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-27 00:51:06.958318 | orchestrator | Friday 27 March 2026 00:49:57 +0000 (0:00:01.137) 0:01:04.378 **********
2026-03-27 00:51:06.958324 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:51:06.958334 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:51:06.958340 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:51:06.958346 | orchestrator |
2026-03-27 00:51:06.958351 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-27 00:51:06.958357 | orchestrator | Friday 27 March 2026 00:49:57 +0000 (0:00:00.419) 0:01:04.798 **********
2026-03-27 00:51:06.958363 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:51:06.958369 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:51:06.958374 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:51:06.958380 | orchestrator |
2026-03-27 00:51:06.958386 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-27 00:51:06.958391 | orchestrator | Friday 27 March 2026 00:49:58 +0000 (0:00:00.463) 0:01:05.262 **********
2026-03-27 00:51:06.958397 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:51:06.958403 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:51:06.958408 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:51:06.958414 | orchestrator |
2026-03-27 00:51:06.958420 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-27 00:51:06.958426 | orchestrator | Friday 27 March 2026 00:49:58 +0000 (0:00:00.311) 0:01:05.574 **********
2026-03-27 00:51:06.958432 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:51:06.958437 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:51:06.958443 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:51:06.958449 | orchestrator |
2026-03-27 00:51:06.958454 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-27 00:51:06.958460 | orchestrator | Friday 27 March 2026 00:49:58 +0000 (0:00:00.378) 0:01:05.952 **********
2026-03-27 00:51:06.958466 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958472 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958477 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958483 | orchestrator |
2026-03-27 00:51:06.958489 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-27 00:51:06.958494 | orchestrator | Friday 27 March 2026 00:49:59 +0000 (0:00:00.285) 0:01:06.237 **********
2026-03-27 00:51:06.958500 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958506 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958512 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958517 | orchestrator |
2026-03-27 00:51:06.958523 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-27 00:51:06.958529 | orchestrator | Friday 27 March 2026 00:49:59 +0000 (0:00:00.309) 0:01:06.546 **********
2026-03-27 00:51:06.958534 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958540 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958545 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958551 | orchestrator |
2026-03-27 00:51:06.958557 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-27 00:51:06.958562 | orchestrator | Friday 27 March 2026 00:49:59 +0000 (0:00:00.512) 0:01:07.059 **********
2026-03-27 00:51:06.958568 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958574 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958580 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958585 | orchestrator |
2026-03-27 00:51:06.958591 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-27 00:51:06.958597 | orchestrator | Friday 27 March 2026 00:50:00 +0000 (0:00:00.391) 0:01:07.450 **********
2026-03-27 00:51:06.958602 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958608 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958614 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958619 | orchestrator |
2026-03-27 00:51:06.958625 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-27 00:51:06.958631 | orchestrator | Friday 27 March 2026 00:50:00 +0000 (0:00:00.354) 0:01:07.805 **********
2026-03-27 00:51:06.958641 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958647 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958653 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958658 | orchestrator |
2026-03-27 00:51:06.958664 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-27 00:51:06.958670 | orchestrator | Friday 27 March 2026 00:50:01 +0000 (0:00:00.474) 0:01:08.279 **********
2026-03-27 00:51:06.958675 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958681 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958687 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958693 | orchestrator |
2026-03-27 00:51:06.958698 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-27 00:51:06.958704 | orchestrator | Friday 27 March 2026 00:50:01 +0000 (0:00:00.448) 0:01:08.728 **********
2026-03-27 00:51:06.958710 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958715 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958721 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958726 | orchestrator |
2026-03-27 00:51:06.958732 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-27 00:51:06.958738 | orchestrator | Friday 27 March 2026 00:50:01 +0000 (0:00:00.250) 0:01:08.978 **********
2026-03-27 00:51:06.958743 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958749 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958755 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958793 | orchestrator |
2026-03-27 00:51:06.958803 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-27 00:51:06.958809 | orchestrator | Friday 27 March 2026 00:50:02 +0000 (0:00:00.408) 0:01:09.387 **********
2026-03-27 00:51:06.958815 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958820 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958826 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958832 | orchestrator |
2026-03-27 00:51:06.958838 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-27 00:51:06.958848 | orchestrator | Friday
2026-03-27 00:51:06 | INFO  | Task 5079a31a-4ad9-4102-8fd6-1328027ac151 is in state SUCCESS
2026-03-27 00:51:06.958854 | orchestrator | 27 March 2026 00:50:02 +0000 (0:00:00.356) 0:01:09.743 **********
2026-03-27 00:51:06.958860 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958866 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958871 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958877 | orchestrator |
2026-03-27 00:51:06.958883 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-27 00:51:06.958893 | orchestrator | Friday 27 March 2026 00:50:03 +0000 (0:00:00.444) 0:01:10.188 **********
2026-03-27 00:51:06.958899 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.958905 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.958910 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.958916 | orchestrator |
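All of the existing-cluster probes in lookup_cluster.yml above (port liveness, leader/follower role, no-leader failure checks) were skipped because no OVN DB container volumes were found on any host, so the role proceeds to bootstrap a fresh cluster via bootstrap-initial.yml below. A simplified sketch of that branch decision (the function name `choose_bootstrap_mode` and the `host:yes|no` encoding are illustrative assumptions, not taken from the role):

```shell
#!/bin/sh
# choose_bootstrap_mode host:yes|no ... : if any host already has an OVN
# DB volume, the cluster exists and new members join it; otherwise a
# brand-new cluster is bootstrapped. Simplified illustration only.
choose_bootstrap_mode() {
  for pair in "$@"; do
    # ${pair##*:} strips everything up to the last ':' (the flag)
    case "${pair##*:}" in
      yes) printf 'join-existing\n'; return 0 ;;
    esac
  done
  printf 'new-cluster\n'
}

choose_bootstrap_mode testbed-node-0:no testbed-node-1:no testbed-node-2:no
# → new-cluster
```

In this run all three DB hosts report no volume, which is why the subsequent "Set bootstrap args fact for NB/SB (new cluster)" tasks run and the "(new member)" variants are skipped.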
2026-03-27 00:51:06.958921 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-27 00:51:06.958927 | orchestrator | Friday 27 March 2026 00:50:03 +0000 (0:00:00.322) 0:01:10.511 **********
2026-03-27 00:51:06.958933 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:51:06.958939 | orchestrator |
2026-03-27 00:51:06.958944 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-27 00:51:06.958950 | orchestrator | Friday 27 March 2026 00:50:03 +0000 (0:00:00.608) 0:01:11.119 **********
2026-03-27 00:51:06.958956 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:51:06.958961 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:51:06.958967 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:51:06.958973 | orchestrator |
2026-03-27 00:51:06.958979 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-27 00:51:06.958990 | orchestrator | Friday 27 March 2026 00:50:04 +0000 (0:00:00.817) 0:01:11.936 **********
2026-03-27 00:51:06.958998 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:51:06.959006 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:51:06.959015 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:51:06.959024 | orchestrator |
2026-03-27 00:51:06.959033 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-27 00:51:06.959042 | orchestrator | Friday 27 March 2026 00:50:05 +0000 (0:00:01.228) 0:01:13.165 **********
2026-03-27 00:51:06.959051 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.959060 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.959066 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.959097 | orchestrator |
2026-03-27 00:51:06.959104 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-27 00:51:06.959110 | orchestrator | Friday 27 March 2026 00:50:06 +0000 (0:00:00.822) 0:01:13.988 **********
2026-03-27 00:51:06.959115 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.959121 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.959127 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.959132 | orchestrator |
2026-03-27 00:51:06.959138 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-27 00:51:06.959144 | orchestrator | Friday 27 March 2026 00:50:07 +0000 (0:00:00.874) 0:01:14.862 **********
2026-03-27 00:51:06.959150 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.959155 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.959161 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.959166 | orchestrator |
2026-03-27 00:51:06.959172 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-27 00:51:06.959178 | orchestrator | Friday 27 March 2026 00:50:08 +0000 (0:00:01.093) 0:01:15.956 **********
2026-03-27 00:51:06.959184 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.959189 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.959195 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.959201 | orchestrator |
2026-03-27 00:51:06.959206 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-27 00:51:06.959212 | orchestrator | Friday 27 March 2026 00:50:09 +0000 (0:00:00.733) 0:01:16.689 **********
2026-03-27 00:51:06.959218 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.959224 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.959229 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.959235 | orchestrator |
2026-03-27 00:51:06.959241 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-27 00:51:06.959247 | orchestrator | Friday 27 March 2026 00:50:10 +0000 (0:00:00.530) 0:01:17.220 **********
2026-03-27 00:51:06.959253 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:51:06.959259 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:51:06.959264 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:51:06.959270 | orchestrator |
2026-03-27 00:51:06.959276 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-27 00:51:06.959281 | orchestrator | Friday 27 March 2026 00:50:10 +0000 (0:00:00.265) 0:01:17.487 **********
2026-03-27 00:51:06.959289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959409 | orchestrator |
2026-03-27 00:51:06.959419 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-27 00:51:06.959430 | orchestrator | Friday 27 March 2026 00:50:12 +0000 (0:00:01.863) 0:01:19.350 **********
2026-03-27 00:51:06.959437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959506 | orchestrator |
2026-03-27 00:51:06.959511 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-27 00:51:06.959517 | orchestrator | Friday 27 March 2026 00:50:16 +0000 (0:00:04.675) 0:01:24.026 **********
2026-03-27 00:51:06.959523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:51:06.959590 | orchestrator |
2026-03-27 00:51:06.959596 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-27 00:51:06.959602 | orchestrator | Friday 27 March 2026 00:50:18 +0000 (0:00:02.057) 0:01:26.083 **********
2026-03-27 00:51:06.959608 | orchestrator |
2026-03-27 00:51:06.959613 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-27 00:51:06.959619 | orchestrator | Friday 27 March 2026 00:50:18 +0000 (0:00:00.067) 0:01:26.151 **********
2026-03-27 00:51:06.959625 | orchestrator |
2026-03-27 00:51:06.959631 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-27 00:51:06.959637 | orchestrator | Friday 27 March 2026 00:50:19 +0000 (0:00:00.071) 0:01:26.222 **********
2026-03-27 00:51:06.959642 | orchestrator |
2026-03-27 00:51:06.959648 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-27 00:51:06.959654 | orchestrator | Friday 27 March 2026 00:50:19 +0000 (0:00:00.061) 0:01:26.284 **********
2026-03-27 00:51:06.959660 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:51:06.959666 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:51:06.959672 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:51:06.959677 | orchestrator | 2026-03-27 00:51:06.959688 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-27 00:51:06.959694 | orchestrator | Friday 27 March 2026 00:50:21 +0000 (0:00:02.503) 0:01:28.787 ********** 2026-03-27 00:51:06.959700 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:51:06.959705 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:51:06.959711 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:51:06.959717 | orchestrator | 2026-03-27 00:51:06.959723 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-27 00:51:06.959729 | orchestrator | Friday 27 March 2026 00:50:24 +0000 (0:00:02.536) 0:01:31.323 ********** 2026-03-27 00:51:06.959734 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:51:06.959740 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:51:06.959746 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:51:06.959752 | orchestrator | 2026-03-27 00:51:06.959757 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-27 00:51:06.959787 | orchestrator | Friday 27 March 2026 00:50:26 +0000 (0:00:02.501) 0:01:33.825 ********** 2026-03-27 00:51:06.959793 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:51:06.959800 | orchestrator | 2026-03-27 00:51:06.959805 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-27 00:51:06.959811 | orchestrator | Friday 27 March 2026 00:50:26 +0000 (0:00:00.104) 0:01:33.929 ********** 2026-03-27 00:51:06.959817 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:51:06.959823 | orchestrator | ok: [testbed-node-2] 2026-03-27 
00:51:06.959829 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:51:06.959835 | orchestrator | 2026-03-27 00:51:06.959840 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-27 00:51:06.959846 | orchestrator | Friday 27 March 2026 00:50:27 +0000 (0:00:00.790) 0:01:34.719 ********** 2026-03-27 00:51:06.959852 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:51:06.959857 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:51:06.959863 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:51:06.959869 | orchestrator | 2026-03-27 00:51:06.959875 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-27 00:51:06.959881 | orchestrator | Friday 27 March 2026 00:50:28 +0000 (0:00:00.659) 0:01:35.379 ********** 2026-03-27 00:51:06.959891 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:51:06.959897 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:51:06.959903 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:51:06.959909 | orchestrator | 2026-03-27 00:51:06.959915 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-27 00:51:06.959921 | orchestrator | Friday 27 March 2026 00:50:29 +0000 (0:00:01.054) 0:01:36.434 ********** 2026-03-27 00:51:06.959926 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:51:06.959932 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:51:06.959938 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:51:06.959944 | orchestrator | 2026-03-27 00:51:06.959953 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-27 00:51:06.959959 | orchestrator | Friday 27 March 2026 00:50:29 +0000 (0:00:00.629) 0:01:37.063 ********** 2026-03-27 00:51:06.959965 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:51:06.959971 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:51:06.959977 | orchestrator | ok: 
[testbed-node-0] 2026-03-27 00:51:06.959983 | orchestrator | 2026-03-27 00:51:06.959988 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-27 00:51:06.959994 | orchestrator | Friday 27 March 2026 00:50:30 +0000 (0:00:00.966) 0:01:38.030 ********** 2026-03-27 00:51:06.960000 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:51:06.960006 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:51:06.960012 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:51:06.960017 | orchestrator | 2026-03-27 00:51:06.960023 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-27 00:51:06.960029 | orchestrator | Friday 27 March 2026 00:50:31 +0000 (0:00:00.859) 0:01:38.890 ********** 2026-03-27 00:51:06.960035 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:51:06.960046 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:51:06.960051 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:51:06.960057 | orchestrator | 2026-03-27 00:51:06.960063 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-27 00:51:06.960069 | orchestrator | Friday 27 March 2026 00:50:32 +0000 (0:00:00.395) 0:01:39.285 ********** 2026-03-27 00:51:06.960075 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960081 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960088 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960094 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960101 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960108 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960114 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960125 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960131 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960145 | orchestrator | 2026-03-27 00:51:06.960151 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-27 00:51:06.960157 | orchestrator | Friday 27 March 2026 00:50:33 +0000 (0:00:01.538) 0:01:40.824 ********** 2026-03-27 00:51:06.960164 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960170 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960203 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960210 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960228 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 
'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960252 | orchestrator | 2026-03-27 00:51:06.960261 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-27 00:51:06.960267 | orchestrator | Friday 27 March 2026 00:50:37 +0000 (0:00:03.832) 0:01:44.656 ********** 2026-03-27 00:51:06.960276 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960282 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960288 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960300 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960312 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960324 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 00:51:06.960330 | orchestrator | 2026-03-27 00:51:06.960336 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-27 00:51:06.960347 | orchestrator | Friday 27 March 2026 00:50:40 +0000 (0:00:02.884) 0:01:47.541 ********** 2026-03-27 00:51:06.960353 | orchestrator | 2026-03-27 00:51:06.960362 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-27 00:51:06.960368 | orchestrator | Friday 27 March 2026 00:50:40 +0000 (0:00:00.083) 0:01:47.624 ********** 2026-03-27 00:51:06.960374 | orchestrator | 2026-03-27 00:51:06.960379 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2026-03-27 00:51:06.960385 | orchestrator | Friday 27 March 2026 00:50:40 +0000 (0:00:00.091) 0:01:47.715 ********** 2026-03-27 00:51:06.960391 | orchestrator | 2026-03-27 00:51:06.960397 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-27 00:51:06.960405 | orchestrator | Friday 27 March 2026 00:50:40 +0000 (0:00:00.169) 0:01:47.885 ********** 2026-03-27 00:51:06.960411 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:51:06.960417 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:51:06.960422 | orchestrator | 2026-03-27 00:51:06.960428 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-27 00:51:06.960434 | orchestrator | Friday 27 March 2026 00:50:46 +0000 (0:00:06.113) 0:01:53.999 ********** 2026-03-27 00:51:06.960440 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:51:06.960445 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:51:06.960451 | orchestrator | 2026-03-27 00:51:06.960457 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-27 00:51:06.960463 | orchestrator | Friday 27 March 2026 00:50:52 +0000 (0:00:06.095) 0:02:00.094 ********** 2026-03-27 00:51:06.960468 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:51:06.960474 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:51:06.960479 | orchestrator | 2026-03-27 00:51:06.960485 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-27 00:51:06.960491 | orchestrator | Friday 27 March 2026 00:50:59 +0000 (0:00:06.386) 0:02:06.481 ********** 2026-03-27 00:51:06.960496 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:51:06.960502 | orchestrator | 2026-03-27 00:51:06.960508 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-27 00:51:06.960513 | 
orchestrator | Friday 27 March 2026 00:50:59 +0000 (0:00:00.115) 0:02:06.597 ********** 2026-03-27 00:51:06.960519 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:51:06.960525 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:51:06.960530 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:51:06.960536 | orchestrator | 2026-03-27 00:51:06.960542 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-27 00:51:06.960547 | orchestrator | Friday 27 March 2026 00:51:00 +0000 (0:00:00.766) 0:02:07.364 ********** 2026-03-27 00:51:06.960553 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:51:06.960558 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:51:06.960564 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:51:06.960572 | orchestrator | 2026-03-27 00:51:06.960581 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-27 00:51:06.960591 | orchestrator | Friday 27 March 2026 00:51:00 +0000 (0:00:00.658) 0:02:08.022 ********** 2026-03-27 00:51:06.960601 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:51:06.960611 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:51:06.960619 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:51:06.960629 | orchestrator | 2026-03-27 00:51:06.960637 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-27 00:51:06.960646 | orchestrator | Friday 27 March 2026 00:51:01 +0000 (0:00:00.786) 0:02:08.809 ********** 2026-03-27 00:51:06.960657 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:51:06.960666 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:51:06.960675 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:51:06.960685 | orchestrator | 2026-03-27 00:51:06.960695 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-27 00:51:06.960704 | orchestrator | Friday 27 March 2026 
00:51:02 +0000 (0:00:00.604) 0:02:09.414 ********** 2026-03-27 00:51:06.960722 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:51:06.960732 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:51:06.960742 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:51:06.960751 | orchestrator | 2026-03-27 00:51:06.960783 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-27 00:51:06.960793 | orchestrator | Friday 27 March 2026 00:51:02 +0000 (0:00:00.725) 0:02:10.139 ********** 2026-03-27 00:51:06.960803 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:51:06.960813 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:51:06.960822 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:51:06.960832 | orchestrator | 2026-03-27 00:51:06.960840 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:51:06.960846 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-27 00:51:06.960852 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-27 00:51:06.960858 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-27 00:51:06.960864 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:51:06.960870 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:51:06.960876 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 00:51:06.960881 | orchestrator | 2026-03-27 00:51:06.960887 | orchestrator | 2026-03-27 00:51:06.960893 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:51:06.960899 | orchestrator | Friday 27 March 2026 00:51:04 +0000 
(0:00:01.301) 0:02:11.440 ********** 2026-03-27 00:51:06.960905 | orchestrator | =============================================================================== 2026-03-27 00:51:06.960917 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.64s 2026-03-27 00:51:06.960923 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.59s 2026-03-27 00:51:06.960929 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.89s 2026-03-27 00:51:06.960935 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.63s 2026-03-27 00:51:06.960940 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.62s 2026-03-27 00:51:06.960950 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.68s 2026-03-27 00:51:06.960956 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.83s 2026-03-27 00:51:06.960962 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.88s 2026-03-27 00:51:06.960968 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.70s 2026-03-27 00:51:06.960974 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.37s 2026-03-27 00:51:06.960979 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.06s 2026-03-27 00:51:06.960985 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.02s 2026-03-27 00:51:06.960991 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.00s 2026-03-27 00:51:06.960996 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.98s 2026-03-27 00:51:06.961002 | orchestrator | ovn-db : Ensuring config directories exist 
------------------------------ 1.86s 2026-03-27 00:51:06.961008 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.65s 2026-03-27 00:51:06.961014 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.56s 2026-03-27 00:51:06.961025 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.54s 2026-03-27 00:51:06.961031 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.30s 2026-03-27 00:51:06.961036 | orchestrator | ovn-db : Set bootstrap args fact for SB (new cluster) ------------------- 1.23s 2026-03-27 00:51:06.961042 | orchestrator | 2026-03-27 00:51:06 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED 2026-03-27 00:51:06.961048 | orchestrator | 2026-03-27 00:51:06 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:51:06.961054 | orchestrator | 2026-03-27 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:51:09.982001 | orchestrator | 2026-03-27 00:51:09 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED 2026-03-27 00:51:09.982297 | orchestrator | 2026-03-27 00:51:09 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:51:09.982530 | orchestrator | 2026-03-27 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:51:13.022264 | orchestrator | 2026-03-27 00:51:13 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED 2026-03-27 00:51:13.023678 | orchestrator | 2026-03-27 00:51:13 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:51:13.023741 | orchestrator | 2026-03-27 00:51:13 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:51:16.070952 | orchestrator | 2026-03-27 00:51:16 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED 2026-03-27 00:51:16.073188 | orchestrator | 2026-03-27 
00:51:16 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:51:16.073369 | orchestrator | 2026-03-27 00:51:16 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:51:19.113722 | orchestrator | 2026-03-27 00:51:19 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED 2026-03-27 00:51:19.114708 | orchestrator | 2026-03-27 00:51:19 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:51:19.114993 | orchestrator | 2026-03-27 00:51:19 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:51:22.149664 | orchestrator | 2026-03-27 00:51:22 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED 2026-03-27 00:51:22.149889 | orchestrator | 2026-03-27 00:51:22 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:51:22.149908 | orchestrator | 2026-03-27 00:51:22 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:51:25.192293 | orchestrator | 2026-03-27 00:51:25 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED 2026-03-27 00:51:25.194396 | orchestrator | 2026-03-27 00:51:25 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:51:25.194473 | orchestrator | 2026-03-27 00:51:25 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:51:28.236602 | orchestrator | 2026-03-27 00:51:28 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED 2026-03-27 00:51:28.238633 | orchestrator | 2026-03-27 00:51:28 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:51:28.239251 | orchestrator | 2026-03-27 00:51:28 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:51:31.283635 | orchestrator | 2026-03-27 00:51:31 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED 2026-03-27 00:51:31.285217 | orchestrator | 2026-03-27 00:51:31 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state 
STARTED
2026-03-27 00:51:31.285283 | orchestrator | 2026-03-27 00:51:31 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:51:34.327841 | orchestrator | 2026-03-27 00:51:34 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state STARTED
2026-03-27 00:51:34.329180 | orchestrator | 2026-03-27 00:51:34 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:51:34.329340 | orchestrator | 2026-03-27 00:51:34 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:53:54.439557 | orchestrator | 2026-03-27 00:53:54 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state
STARTED
2026-03-27 00:53:54.439987 | orchestrator | 2026-03-27 00:53:54 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:53:54.440016 | orchestrator | 2026-03-27 00:53:54 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:53:57.490050 | orchestrator | 2026-03-27 00:53:57 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:53:57.497323 | orchestrator | 2026-03-27 00:53:57 | INFO  | Task 4168b5c1-08e3-4a89-a1b6-1b937681cbfc is in state SUCCESS
2026-03-27 00:53:57.499846 | orchestrator |
2026-03-27 00:53:57.499959 | orchestrator |
2026-03-27 00:53:57.499971 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 00:53:57.499981 | orchestrator |
2026-03-27 00:53:57.500014 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 00:53:57.500023 | orchestrator | Friday 27 March 2026 00:47:43 +0000 (0:00:00.576) 0:00:00.576 **********
2026-03-27 00:53:57.500043 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:53:57.500050 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:53:57.500054 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:53:57.500059 | orchestrator |
2026-03-27 00:53:57.500063 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 00:53:57.500068 | orchestrator | Friday 27 March 2026 00:47:43 +0000 (0:00:00.405) 0:00:00.981 **********
2026-03-27 00:53:57.500073 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-27 00:53:57.500078 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-27 00:53:57.500082 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-27 00:53:57.500087 | orchestrator |
2026-03-27 00:53:57.500091 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-27
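The wait loop above follows a simple pattern: look up the state of every submitted task, log it, sleep, and repeat until each task leaves STARTED for a terminal state such as SUCCESS. A minimal sketch of that pattern, where `get_task_state` is a hypothetical stand-in for the status lookup the real OSISM client performs:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, log=print):
    """Poll task states until every task reaches a terminal state."""
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical status lookup
            states[task_id] = state
            log(f"Task {task_id} is in state {state}")
        # Keep polling anything that is not yet in a terminal state.
        pending = {t for t, s in states.items() if s not in ("SUCCESS", "FAILURE")}
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

Note that in the log the checks arrive roughly every three seconds even though the message says "Wait 1 second(s)"; the extra time is presumably spent in the status lookups themselves.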
00:53:57.500095 | orchestrator |
2026-03-27 00:53:57.500100 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-27 00:53:57.500104 | orchestrator | Friday 27 March 2026 00:47:44 +0000 (0:00:00.470) 0:00:01.452 **********
2026-03-27 00:53:57.500109 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:53:57.500114 | orchestrator |
2026-03-27 00:53:57.500119 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-27 00:53:57.500123 | orchestrator | Friday 27 March 2026 00:47:45 +0000 (0:00:01.372) 0:00:02.825 **********
2026-03-27 00:53:57.500128 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:53:57.500132 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:53:57.500137 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:53:57.500141 | orchestrator |
2026-03-27 00:53:57.500146 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-27 00:53:57.500151 | orchestrator | Friday 27 March 2026 00:47:47 +0000 (0:00:01.807) 0:00:04.632 **********
2026-03-27 00:53:57.500155 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:53:57.500160 | orchestrator |
2026-03-27 00:53:57.500164 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-27 00:53:57.500169 | orchestrator | Friday 27 March 2026 00:47:48 +0000 (0:00:00.702) 0:00:05.335 **********
2026-03-27 00:53:57.500173 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:53:57.500178 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:53:57.500182 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:53:57.500187 | orchestrator |
2026-03-27 00:53:57.500191 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-27 00:53:57.500196 | orchestrator | Friday 27 March 2026 00:47:49 +0000 (0:00:01.734) 0:00:07.070 **********
2026-03-27 00:53:57.500200 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-27 00:53:57.500205 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-27 00:53:57.500209 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-27 00:53:57.500214 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-27 00:53:57.500218 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-27 00:53:57.500223 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-27 00:53:57.500227 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-27 00:53:57.500233 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-27 00:53:57.500237 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-27 00:53:57.500247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-27 00:53:57.500251 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-27 00:53:57.500256 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-27 00:53:57.500260 | orchestrator |
2026-03-27 00:53:57.500265 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-27 00:53:57.500269 | orchestrator | Friday 27 March 2026 00:47:52 +0000 (0:00:02.357) 0:00:09.427 **********
2026-03-27 00:53:57.500274 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-27 00:53:57.500281 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-27 00:53:57.500288 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-27 00:53:57.500300 | orchestrator |
2026-03-27 00:53:57.500308 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-27 00:53:57.500315 | orchestrator | Friday 27 March 2026 00:47:53 +0000 (0:00:00.837) 0:00:10.264 **********
2026-03-27 00:53:57.500322 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-27 00:53:57.500330 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-27 00:53:57.500338 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-27 00:53:57.500344 | orchestrator |
2026-03-27 00:53:57.500351 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-27 00:53:57.500358 | orchestrator | Friday 27 March 2026 00:47:54 +0000 (0:00:01.262) 0:00:11.526 **********
2026-03-27 00:53:57.500364 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-27 00:53:57.500371 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.500393 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-27 00:53:57.500400 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.500408 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-27 00:53:57.500414 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.500421 | orchestrator |
2026-03-27 00:53:57.500477 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-27 00:53:57.500490 | orchestrator | Friday 27 March 2026 00:47:55 +0000 (0:00:00.783) 0:00:12.310 **********
2026-03-27 00:53:57.500499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.500559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.500565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.500570 | orchestrator | 2026-03-27 00:53:57.500575 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-27 00:53:57.500580 | orchestrator | Friday 27 March 2026 00:47:57 +0000 (0:00:01.905) 0:00:14.216 ********** 2026-03-27 00:53:57.500585 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.500594 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.500600 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.500605 | orchestrator | 2026-03-27 
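Each item in the dumps above is a container definition keyed by service name (`haproxy`, `proxysql`, `keepalived`), carrying an `enabled` flag, image, volumes, and an optional healthcheck; later tasks skip definitions with `enabled: False` such as `haproxy-ssh`. A minimal sketch of filtering such a map down to the containers that should actually be deployed — the dict shape mirrors the log, the helper name is hypothetical:

```python
def containers_to_deploy(services):
    """Return container names for services whose 'enabled' flag is set."""
    return [
        spec["container_name"]
        for spec in services.values()
        if spec.get("enabled", False)  # haproxy-ssh has enabled=False and is skipped
    ]

# Trimmed service definitions, shaped like the items in the log above.
services = {
    "haproxy":     {"container_name": "haproxy",     "enabled": True},
    "proxysql":    {"container_name": "proxysql",    "enabled": True},
    "keepalived":  {"container_name": "keepalived",  "enabled": True},
    "haproxy-ssh": {"container_name": "haproxy_ssh", "enabled": False},
}
```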
00:53:57.500610 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-27 00:53:57.500616 | orchestrator | Friday 27 March 2026 00:47:57 +0000 (0:00:00.830) 0:00:15.046 **********
2026-03-27 00:53:57.500621 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-27 00:53:57.500626 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-27 00:53:57.500632 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-27 00:53:57.500636 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-27 00:53:57.500641 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-27 00:53:57.500645 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-27 00:53:57.500649 | orchestrator |
2026-03-27 00:53:57.500654 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-27 00:53:57.500658 | orchestrator | Friday 27 March 2026 00:47:59 +0000 (0:00:01.964) 0:00:17.010 **********
2026-03-27 00:53:57.500663 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:53:57.500667 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:53:57.500671 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:53:57.500676 | orchestrator |
2026-03-27 00:53:57.500680 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-27 00:53:57.500685 | orchestrator | Friday 27 March 2026 00:48:01 +0000 (0:00:01.470) 0:00:18.481 **********
2026-03-27 00:53:57.500689 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:53:57.500694 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:53:57.500698 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:53:57.500703 | orchestrator |
2026-03-27 00:53:57.500707 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-27 00:53:57.500712 | orchestrator | Friday 27 March 2026 00:48:02 +0000 (0:00:01.581) 0:00:20.063
********** 2026-03-27 00:53:57.500716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.500732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.500740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.500745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 
'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be', '__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-27 00:53:57.500753 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.500758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.500763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.500768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.500772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be', '__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-27 00:53:57.500777 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.500789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.500794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.500807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.500812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be', '__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-27 
00:53:57.500816 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.500821 | orchestrator | 2026-03-27 00:53:57.500826 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-27 00:53:57.500830 | orchestrator | Friday 27 March 2026 00:48:03 +0000 (0:00:00.813) 0:00:20.877 ********** 2026-03-27 00:53:57.500835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.500900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be', '__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-27 00:53:57.500906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.500915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be', '__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-27 00:53:57.500936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.500941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be', 
'__omit_place_holder__ca63302b9392e157e6dd0c84b77a753107b377be'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-27 00:53:57.500946 | orchestrator | 2026-03-27 00:53:57.500950 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-27 00:53:57.500955 | orchestrator | Friday 27 March 2026 00:48:07 +0000 (0:00:03.653) 0:00:24.531 ********** 2026-03-27 00:53:57.500960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.500997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.501002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.501006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.501011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.501016 | orchestrator | 2026-03-27 00:53:57.501021 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-27 00:53:57.501028 | orchestrator | Friday 27 March 2026 00:48:10 +0000 (0:00:02.986) 0:00:27.517 ********** 2026-03-27 00:53:57.501039 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-27 00:53:57.501049 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-27 00:53:57.501056 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-27 00:53:57.501063 | orchestrator | 2026-03-27 00:53:57.501070 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-27 00:53:57.501082 | orchestrator | Friday 27 March 2026 00:48:11 +0000 (0:00:01.449) 0:00:28.967 ********** 2026-03-27 00:53:57.501089 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-27 00:53:57.501096 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-27 00:53:57.501103 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-27 00:53:57.501111 | orchestrator | 2026-03-27 00:53:57.501396 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-27 00:53:57.501417 | orchestrator | Friday 27 March 2026 00:48:14 +0000 (0:00:02.793) 0:00:31.761 ********** 2026-03-27 00:53:57.501422 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.501427 
| orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.501432 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.501437 | orchestrator | 2026-03-27 00:53:57.501442 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-27 00:53:57.501446 | orchestrator | Friday 27 March 2026 00:48:16 +0000 (0:00:02.209) 0:00:33.970 ********** 2026-03-27 00:53:57.501451 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-27 00:53:57.501456 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-27 00:53:57.501461 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-27 00:53:57.501466 | orchestrator | 2026-03-27 00:53:57.501470 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-27 00:53:57.501475 | orchestrator | Friday 27 March 2026 00:48:19 +0000 (0:00:02.719) 0:00:36.690 ********** 2026-03-27 00:53:57.501479 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-27 00:53:57.501484 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-27 00:53:57.501489 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-27 00:53:57.501493 | orchestrator | 2026-03-27 00:53:57.501498 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-27 00:53:57.501502 | orchestrator | Friday 27 March 2026 00:48:21 +0000 (0:00:01.892) 0:00:38.582 ********** 2026-03-27 00:53:57.501507 | orchestrator | changed: [testbed-node-1] => 
(item=haproxy.pem) 2026-03-27 00:53:57.501511 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-27 00:53:57.501516 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-27 00:53:57.501520 | orchestrator | 2026-03-27 00:53:57.501525 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-27 00:53:57.501529 | orchestrator | Friday 27 March 2026 00:48:22 +0000 (0:00:01.547) 0:00:40.130 ********** 2026-03-27 00:53:57.501534 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-27 00:53:57.501539 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-27 00:53:57.501543 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-27 00:53:57.501548 | orchestrator | 2026-03-27 00:53:57.501552 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-27 00:53:57.501557 | orchestrator | Friday 27 March 2026 00:48:24 +0000 (0:00:01.714) 0:00:41.844 ********** 2026-03-27 00:53:57.501561 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.501566 | orchestrator | 2026-03-27 00:53:57.501570 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-27 00:53:57.501574 | orchestrator | Friday 27 March 2026 00:48:25 +0000 (0:00:01.094) 0:00:42.939 ********** 2026-03-27 00:53:57.501587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.501593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.501629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.501635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.501640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.501645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.501650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.501659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.501664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.501669 | orchestrator | 2026-03-27 00:53:57.501673 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-27 00:53:57.501678 | orchestrator | Friday 27 March 2026 00:48:30 +0000 (0:00:04.334) 0:00:47.274 ********** 2026-03-27 00:53:57.501690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.501695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.501700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.501704 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.501709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.501719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.501723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.501728 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.501733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.501743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.501748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.501753 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.501757 | orchestrator | 2026-03-27 00:53:57.501762 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-27 00:53:57.501766 | orchestrator | Friday 27 March 2026 00:48:30 +0000 (0:00:00.768) 0:00:48.042 ********** 2026-03-27 00:53:57.501771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.501782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.501787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.501792 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.501796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.501806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.501812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.501816 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.501821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.501829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.501834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.501838 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.501843 | orchestrator | 2026-03-27 00:53:57.501848 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-27 00:53:57.501852 | orchestrator | Friday 27 March 2026 00:48:32 +0000 (0:00:01.689) 0:00:49.732 ********** 2026-03-27 00:53:57.501857 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.501908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.501920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.501928 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.501938 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.501957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502143 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.502152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502184 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.502190 | orchestrator | 2026-03-27 00:53:57.502198 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2026-03-27 00:53:57.502210 | orchestrator | Friday 27 March 2026 00:48:33 +0000 (0:00:00.654) 0:00:50.386 ********** 2026-03-27 00:53:57.502219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502259 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.502267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502280 
| orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.502296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502331 | orchestrator | skipping: [testbed-node-2] 
2026-03-27 00:53:57.502338 | orchestrator | 2026-03-27 00:53:57.502345 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-27 00:53:57.502352 | orchestrator | Friday 27 March 2026 00:48:33 +0000 (0:00:00.586) 0:00:50.973 ********** 2026-03-27 00:53:57.502360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502383 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.502395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502428 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.502435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502459 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.502464 | orchestrator | 2026-03-27 00:53:57.502469 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-27 00:53:57.502474 | orchestrator | Friday 27 March 2026 00:48:34 +0000 (0:00:01.095) 0:00:52.068 ********** 2026-03-27 00:53:57.502478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502505 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.502510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502525 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.502529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502553 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.502558 | orchestrator | 2026-03-27 00:53:57.502562 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-27 00:53:57.502567 | orchestrator | Friday 27 March 2026 00:48:35 +0000 (0:00:00.534) 0:00:52.603 ********** 2026-03-27 00:53:57.502572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502582 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502587 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.502591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502613 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502621 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.502626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502640 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.502644 | orchestrator | 2026-03-27 00:53:57.502649 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-27 00:53:57.502653 | orchestrator | Friday 27 March 2026 00:48:35 +0000 (0:00:00.512) 0:00:53.115 ********** 2026-03-27 00:53:57.502658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502676 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.502687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502702 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.502706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-27 00:53:57.502711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-27 00:53:57.502719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-27 00:53:57.502724 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.502729 | orchestrator | 2026-03-27 00:53:57.502733 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-27 00:53:57.502738 | orchestrator | Friday 27 March 2026 00:48:37 +0000 (0:00:01.104) 0:00:54.220 ********** 2026-03-27 00:53:57.502742 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-27 00:53:57.502747 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-27 00:53:57.502756 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-27 00:53:57.502760 | orchestrator | 2026-03-27 00:53:57.502765 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-27 00:53:57.502769 | orchestrator | Friday 27 March 2026 00:48:38 +0000 (0:00:01.538) 0:00:55.758 ********** 2026-03-27 00:53:57.502831 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-27 00:53:57.502838 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-27 00:53:57.502843 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-27 00:53:57.502847 | orchestrator | 2026-03-27 00:53:57.502852 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-27 00:53:57.502857 | orchestrator | Friday 27 March 2026 00:48:39 +0000 (0:00:01.398) 0:00:57.156 ********** 2026-03-27 00:53:57.502879 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-27 00:53:57.502884 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-27 00:53:57.502888 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.502893 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-27 00:53:57.502898 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-27 00:53:57.502903 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-27 00:53:57.502907 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.502912 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-27 00:53:57.502916 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.502938 | orchestrator | 2026-03-27 00:53:57.502942 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-27 00:53:57.502948 | orchestrator | Friday 27 March 2026 00:48:41 +0000 (0:00:01.228) 0:00:58.385 ********** 2026-03-27 00:53:57.502953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.502962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.502967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-27 00:53:57.502978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.502986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.502991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-27 00:53:57.502996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.503005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.503010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-27 00:53:57.503015 | orchestrator | 2026-03-27 00:53:57.503019 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-27 00:53:57.503024 | orchestrator | Friday 27 March 2026 00:48:44 +0000 (0:00:03.657) 0:01:02.043 ********** 2026-03-27 00:53:57.503028 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.503033 | orchestrator | 2026-03-27 00:53:57.503038 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-27 00:53:57.503042 | orchestrator | Friday 27 
March 2026 00:48:45 +0000 (0:00:00.510) 0:01:02.554 ********** 2026-03-27 00:53:57.503048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-27 00:53:57.503061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.503066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.503071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.503080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-27 00:53:57.503085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.503089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.503607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.503634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-27 00:53:57.503640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.503652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.503657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.503662 | orchestrator | 2026-03-27 00:53:57.503667 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-27 00:53:57.503671 | orchestrator | Friday 27 March 2026 00:48:49 +0000 (0:00:04.409) 0:01:06.963 ********** 2026-03-27 00:53:57.503676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-27 00:53:57.503690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})
2026-03-27 00:53:57.503695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.503700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.503711 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.503717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-27 00:53:57.503721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-27 00:53:57.503726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.503731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.503735 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.503748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-27 00:53:57.503757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-27 00:53:57.503762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.503767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.503771 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.503776 | orchestrator |
2026-03-27 00:53:57.503780 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-03-27 00:53:57.503785 | orchestrator | Friday 27 March 2026 00:48:50 +0000 (0:00:00.904) 0:01:07.867 **********
2026-03-27 00:53:57.503790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-27 00:53:57.503796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-27 00:53:57.503801 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.503806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-27 00:53:57.503811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-27 00:53:57.503815 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.503820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-27 00:53:57.503825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-27 00:53:57.503830 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.503834 | orchestrator |
2026-03-27 00:53:57.503842 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-03-27 00:53:57.503846 | orchestrator | Friday 27 March 2026 00:48:51 +0000 (0:00:01.111) 0:01:08.979 **********
2026-03-27 00:53:57.503851 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:53:57.503855 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:53:57.503915 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:53:57.503921 | orchestrator |
2026-03-27 00:53:57.503926 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-03-27 00:53:57.503930 | orchestrator | Friday 27 March 2026 00:48:53 +0000 (0:00:02.015) 0:01:10.995 **********
2026-03-27 00:53:57.503935 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:53:57.503939 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:53:57.503944 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:53:57.503948 | orchestrator |
2026-03-27 00:53:57.503953 | orchestrator | TASK [include_role : barbican] *************************************************
2026-03-27 00:53:57.503958 | orchestrator | Friday 27 March 2026 00:48:56 +0000 (0:00:03.132) 0:01:14.127 **********
2026-03-27 00:53:57.503962 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:53:57.503967 | orchestrator |
2026-03-27 00:53:57.503971 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-03-27 00:53:57.503976 | orchestrator | Friday 27 March 2026 00:48:57 +0000 (0:00:00.642) 0:01:14.770 **********
2026-03-27 00:53:57.503982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.503987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.504085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.504101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504111 | orchestrator |
2026-03-27 00:53:57.504116 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-03-27 00:53:57.504121 | orchestrator | Friday 27 March 2026 00:49:02 +0000 (0:00:05.186) 0:01:19.956 **********
2026-03-27 00:53:57.504136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.504142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504152 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.504157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.504162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504176 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.504187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.504193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-27 00:53:57.504203 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.504208 | orchestrator |
2026-03-27 00:53:57.504213 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-27 00:53:57.504218 | orchestrator | Friday 27 March 2026 00:49:03 +0000 (0:00:01.069) 0:01:21.026 **********
2026-03-27 00:53:57.504223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-27 00:53:57.504230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-27 00:53:57.504235 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.504241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-27 00:53:57.504247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-27 00:53:57.504257 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.504262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-27 00:53:57.504268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-27 00:53:57.504274 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.504280 | orchestrator |
2026-03-27 00:53:57.504285 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-27 00:53:57.504291 | orchestrator | Friday 27 March 2026 00:49:04 +0000 (0:00:01.018) 0:01:22.044 **********
2026-03-27 00:53:57.504296 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:53:57.504302 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:53:57.504307 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:53:57.504313 | orchestrator |
2026-03-27 00:53:57.504318 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-03-27 00:53:57.504324 | orchestrator | Friday 27 March 2026 00:49:06 +0000 (0:00:01.288) 0:01:23.333 **********
2026-03-27 00:53:57.504329 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:53:57.504335 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:53:57.504340 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:53:57.504345 | orchestrator |
2026-03-27 00:53:57.504473 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-27 00:53:57.504487 | orchestrator | Friday 27 March 2026 00:49:08 +0000 (0:00:01.895) 0:01:25.229 **********
2026-03-27 00:53:57.504494 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.504501 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.504512 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.504520 | orchestrator |
2026-03-27 00:53:57.504526 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-27 00:53:57.504533 | orchestrator | Friday 27 March 2026 00:49:08 +0000 (0:00:00.280) 0:01:25.509 **********
2026-03-27 00:53:57.504540 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:53:57.504547 | orchestrator |
2026-03-27 00:53:57.504554 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-03-27 00:53:57.504561 | orchestrator | Friday 27 March 2026 00:49:09 +0000 (0:00:01.610) 0:01:27.119 **********
2026-03-27 00:53:57.504570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-27 00:53:57.504580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-27 00:53:57.504595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-27 00:53:57.504604 | orchestrator |
2026-03-27 00:53:57.504610 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-27 00:53:57.504618 | orchestrator | Friday 27 March 2026 00:49:12 +0000 (0:00:02.409) 0:01:29.528 **********
2026-03-27 00:53:57.504630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-27 00:53:57.504640 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.504649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-27 00:53:57.504654 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.504659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-27 00:53:57.504667 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.504673 | orchestrator |
2026-03-27 00:53:57.504679 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-27 00:53:57.504690 | orchestrator | Friday 27 March 2026 00:49:13 +0000 (0:00:01.365) 0:01:30.894 **********
2026-03-27 00:53:57.504701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-27 00:53:57.504711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-27 00:53:57.504719 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.504725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-27 00:53:57.504732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-27 00:53:57.504744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-27 00:53:57.504772 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.504787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-27 00:53:57.504795 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.504802 | orchestrator |
2026-03-27 00:53:57.504809 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-27 00:53:57.504816 | orchestrator | Friday 27 March 2026 00:49:15 +0000 (0:00:01.686) 0:01:32.581 **********
2026-03-27 00:53:57.504823 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.504829 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.504836 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.504973 | orchestrator |
2026-03-27 00:53:57.504981 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-27 00:53:57.504989 | orchestrator | Friday 27 March 2026 00:49:15 +0000 (0:00:00.384) 0:01:32.966 **********
2026-03-27 00:53:57.504996 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.505004 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.505018 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.505026 | orchestrator |
2026-03-27 00:53:57.505033 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-03-27 00:53:57.505042 | orchestrator | Friday 27 March 2026 00:49:16 +0000 (0:00:01.070) 0:01:34.036 **********
2026-03-27 00:53:57.505050 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:53:57.505058 | orchestrator |
2026-03-27 00:53:57.505065 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-03-27 00:53:57.505074 | orchestrator | Friday 27 March 2026 00:49:17 +0000 (0:00:00.851) 0:01:34.887 **********
2026-03-27 00:53:57.505083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.505093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.505102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505123 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.505169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505189 | orchestrator | 2026-03-27 00:53:57.505194 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-27 00:53:57.505199 | orchestrator | Friday 27 March 2026 00:49:20 +0000 (0:00:03.283) 0:01:38.171 ********** 2026-03-27 00:53:57.505205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-27 00:53:57.505210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-27 00:53:57.505232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-27 00:53:57.505237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505258 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.505272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505302 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.505310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505318 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.505325 | orchestrator | 2026-03-27 00:53:57.505333 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-27 00:53:57.505341 | orchestrator | Friday 27 March 2026 00:49:21 +0000 (0:00:00.537) 0:01:38.708 ********** 2026-03-27 00:53:57.505348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-27 00:53:57.505357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-27 00:53:57.505365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-27 00:53:57.505373 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.505380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-27 00:53:57.505393 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.505401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-27 00:53:57.505413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-27 00:53:57.505425 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.505433 | orchestrator | 2026-03-27 00:53:57.505441 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-27 00:53:57.505447 | orchestrator | Friday 27 March 2026 00:49:22 +0000 (0:00:01.008) 0:01:39.716 ********** 2026-03-27 00:53:57.505452 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.505458 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.505463 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.505468 | orchestrator | 2026-03-27 00:53:57.505473 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-27 00:53:57.505479 | orchestrator | Friday 27 March 2026 00:49:23 +0000 (0:00:01.291) 0:01:41.008 ********** 2026-03-27 00:53:57.505484 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.505489 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.505494 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.505500 | orchestrator | 2026-03-27 00:53:57.505505 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-27 00:53:57.505510 | orchestrator | Friday 27 March 2026 00:49:25 +0000 (0:00:01.923) 0:01:42.932 ********** 2026-03-27 00:53:57.505515 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.505520 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.505526 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.505530 | orchestrator | 2026-03-27 00:53:57.505535 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-27 00:53:57.505539 | orchestrator | Friday 27 March 2026 00:49:26 +0000 (0:00:00.319) 0:01:43.251 ********** 2026-03-27 
00:53:57.505544 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.505588 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.505593 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.505597 | orchestrator | 2026-03-27 00:53:57.505602 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-27 00:53:57.505607 | orchestrator | Friday 27 March 2026 00:49:26 +0000 (0:00:00.339) 0:01:43.590 ********** 2026-03-27 00:53:57.505611 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.505616 | orchestrator | 2026-03-27 00:53:57.505620 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-27 00:53:57.505625 | orchestrator | Friday 27 March 2026 00:49:27 +0000 (0:00:00.935) 0:01:44.526 ********** 2026-03-27 00:53:57.505634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 00:53:57.505643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 00:53:57.505655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 00:53:57.505883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 00:53:57.505894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 00:53:57.505928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 00:53:57.505944 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.505971 | orchestrator | 2026-03-27 00:53:57.505976 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-27 00:53:57.505980 | orchestrator | Friday 27 March 2026 00:49:31 +0000 (0:00:04.159) 0:01:48.686 ********** 2026-03-27 00:53:57.505985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 00:53:57.505995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 00:53:57.506000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506056 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.506065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 00:53:57.506073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 00:53:57.506078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 00:53:57.506091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506096 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 00:53:57.506103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506134 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.506138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.506161 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.506165 | orchestrator | 2026-03-27 00:53:57.506172 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-27 00:53:57.506177 | orchestrator | Friday 27 March 2026 00:49:32 
+0000 (0:00:01.297) 0:01:49.984 ********** 2026-03-27 00:53:57.506182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-27 00:53:57.506187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-27 00:53:57.506193 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.506197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-27 00:53:57.506202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-27 00:53:57.506209 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.506214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-27 00:53:57.506219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-27 00:53:57.506223 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.506228 | orchestrator | 2026-03-27 00:53:57.506232 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-27 00:53:57.506237 | orchestrator | Friday 27 March 2026 00:49:35 +0000 (0:00:02.235) 0:01:52.219 ********** 2026-03-27 00:53:57.506241 | orchestrator | 
changed: [testbed-node-0] 2026-03-27 00:53:57.506246 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.506250 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.506255 | orchestrator | 2026-03-27 00:53:57.506314 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-27 00:53:57.506322 | orchestrator | Friday 27 March 2026 00:49:36 +0000 (0:00:01.368) 0:01:53.587 ********** 2026-03-27 00:53:57.506329 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.506338 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.506346 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.506355 | orchestrator | 2026-03-27 00:53:57.506363 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-27 00:53:57.506371 | orchestrator | Friday 27 March 2026 00:49:38 +0000 (0:00:02.015) 0:01:55.603 ********** 2026-03-27 00:53:57.506378 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.506385 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.506389 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.506393 | orchestrator | 2026-03-27 00:53:57.506398 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-27 00:53:57.506402 | orchestrator | Friday 27 March 2026 00:49:38 +0000 (0:00:00.295) 0:01:55.898 ********** 2026-03-27 00:53:57.506407 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.506411 | orchestrator | 2026-03-27 00:53:57.506416 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-27 00:53:57.506420 | orchestrator | Friday 27 March 2026 00:49:39 +0000 (0:00:00.966) 0:01:56.865 ********** 2026-03-27 00:53:57.506435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 00:53:57.506446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.506453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 00:53:57.507180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.507221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 00:53:57.507233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.507241 | orchestrator | 2026-03-27 00:53:57.507246 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-27 00:53:57.507250 | orchestrator | Friday 27 March 2026 00:49:43 +0000 (0:00:04.259) 0:02:01.125 ********** 2026-03-27 00:53:57.507272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-27 00:53:57.507283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.507291 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.507296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-27 00:53:57.507307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.507319 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.507326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-27 00:53:57.507341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.507356 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.507362 | orchestrator | 2026-03-27 00:53:57.507368 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-27 00:53:57.507374 | orchestrator | Friday 27 March 2026 00:49:47 +0000 (0:00:03.261) 0:02:04.386 ********** 2026-03-27 00:53:57.507384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-27 00:53:57.507391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-27 00:53:57.507398 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.507404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-27 00:53:57.507410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-27 00:53:57.507417 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.507424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-27 00:53:57.507430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-27 
00:53:57.507437 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.507443 | orchestrator | 2026-03-27 00:53:57.507450 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-27 00:53:57.507461 | orchestrator | Friday 27 March 2026 00:49:50 +0000 (0:00:03.630) 0:02:08.017 ********** 2026-03-27 00:53:57.507468 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.507474 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.507481 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.507487 | orchestrator | 2026-03-27 00:53:57.507494 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-27 00:53:57.507501 | orchestrator | Friday 27 March 2026 00:49:51 +0000 (0:00:01.094) 0:02:09.111 ********** 2026-03-27 00:53:57.507505 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.507509 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.507513 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.507517 | orchestrator | 2026-03-27 00:53:57.507521 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-27 00:53:57.507528 | orchestrator | Friday 27 March 2026 00:49:53 +0000 (0:00:01.802) 0:02:10.914 ********** 2026-03-27 00:53:57.507533 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.507537 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.507541 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.507545 | orchestrator | 2026-03-27 00:53:57.507549 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-27 00:53:57.507556 | orchestrator | Friday 27 March 2026 00:49:54 +0000 (0:00:00.299) 0:02:11.213 ********** 2026-03-27 00:53:57.507560 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.507564 | orchestrator | 
2026-03-27 00:53:57.507568 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-27 00:53:57.507572 | orchestrator | Friday 27 March 2026 00:49:55 +0000 (0:00:01.024) 0:02:12.238 ********** 2026-03-27 00:53:57.507577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 00:53:57.507621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 00:53:57.507626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 00:53:57.507630 | orchestrator | 2026-03-27 00:53:57.507634 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-27 00:53:57.507638 | orchestrator | Friday 27 March 2026 00:49:59 +0000 (0:00:04.091) 0:02:16.329 ********** 2026-03-27 00:53:57.507646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-27 00:53:57.507651 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.507659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-27 00:53:57.507664 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.507671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-27 00:53:57.507692 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.507697 | orchestrator | 2026-03-27 00:53:57.507701 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-27 00:53:57.507705 | orchestrator | Friday 27 March 2026 00:49:59 +0000 (0:00:00.429) 0:02:16.759 ********** 2026-03-27 00:53:57.507710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-27 00:53:57.507715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-27 00:53:57.507720 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.507724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}})  2026-03-27 00:53:57.507728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-27 00:53:57.507732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-27 00:53:57.507736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-27 00:53:57.507740 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.507748 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.507752 | orchestrator | 2026-03-27 00:53:57.507756 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-27 00:53:57.507760 | orchestrator | Friday 27 March 2026 00:50:00 +0000 (0:00:01.039) 0:02:17.798 ********** 2026-03-27 00:53:57.507765 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.507770 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.507775 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.507780 | orchestrator | 2026-03-27 00:53:57.507784 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-27 00:53:57.507789 | orchestrator | Friday 27 March 2026 00:50:01 +0000 (0:00:01.344) 0:02:19.142 ********** 2026-03-27 00:53:57.507794 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.507798 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.507803 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.507807 | orchestrator | 2026-03-27 00:53:57.507812 | orchestrator 
| TASK [include_role : heat] ***************************************************** 2026-03-27 00:53:57.507816 | orchestrator | Friday 27 March 2026 00:50:03 +0000 (0:00:01.944) 0:02:21.087 ********** 2026-03-27 00:53:57.507821 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.507826 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.507830 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.507835 | orchestrator | 2026-03-27 00:53:57.507839 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-27 00:53:57.507844 | orchestrator | Friday 27 March 2026 00:50:04 +0000 (0:00:00.356) 0:02:21.443 ********** 2026-03-27 00:53:57.507849 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.507853 | orchestrator | 2026-03-27 00:53:57.507857 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-27 00:53:57.507884 | orchestrator | Friday 27 March 2026 00:50:05 +0000 (0:00:01.446) 0:02:22.890 ********** 2026-03-27 00:53:57.507899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:53:57.507910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:53:57.508058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:53:57.508069 | orchestrator | 2026-03-27 00:53:57.508074 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-27 00:53:57.508078 | orchestrator | Friday 27 March 2026 00:50:11 +0000 
(0:00:06.214) 0:02:29.105 ********** 2026-03-27 00:53:57.508086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-27 00:53:57.508092 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.508100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-27 00:53:57.508108 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.508119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-27 00:53:57.508125 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.508130 | orchestrator | 2026-03-27 00:53:57.508135 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-27 00:53:57.508140 | orchestrator | Friday 27 March 2026 00:50:12 +0000 (0:00:00.741) 0:02:29.846 ********** 2026-03-27 00:53:57.508145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-27 00:53:57.508151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-27 00:53:57.508159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-27 00:53:57.508166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-27 00:53:57.508170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-27 00:53:57.508176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-27 00:53:57.508180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-27 00:53:57.508186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-27 00:53:57.508190 | orchestrator | skipping: [testbed-node-0] 2026-03-27 
00:53:57.508194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-27 00:53:57.508199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-27 00:53:57.508203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-27 00:53:57.508207 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.508213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-27 00:53:57.508220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-27 00:53:57.508225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-27 00:53:57.508232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-27 00:53:57.508236 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.508240 | orchestrator | 2026-03-27 00:53:57.508244 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-27 00:53:57.508248 | orchestrator | Friday 27 March 2026 00:50:14 +0000 (0:00:01.638) 0:02:31.485 ********** 2026-03-27 00:53:57.508252 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.508256 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.508260 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.508264 | orchestrator | 2026-03-27 00:53:57.508268 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-27 00:53:57.508272 | orchestrator | Friday 27 March 2026 00:50:15 +0000 (0:00:01.685) 0:02:33.170 ********** 2026-03-27 00:53:57.508276 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.508280 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.508284 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.508288 | orchestrator | 2026-03-27 00:53:57.508293 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-27 00:53:57.508331 | orchestrator | Friday 27 March 2026 00:50:17 +0000 (0:00:01.917) 0:02:35.087 ********** 2026-03-27 00:53:57.508336 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.508355 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.508360 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.508364 | orchestrator | 2026-03-27 00:53:57.508368 | orchestrator | TASK [include_role : ironic] 
*************************************************** 2026-03-27 00:53:57.508372 | orchestrator | Friday 27 March 2026 00:50:18 +0000 (0:00:00.282) 0:02:35.370 ********** 2026-03-27 00:53:57.508376 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.508380 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.508384 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.508388 | orchestrator | 2026-03-27 00:53:57.508393 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-27 00:53:57.508397 | orchestrator | Friday 27 March 2026 00:50:18 +0000 (0:00:00.279) 0:02:35.649 ********** 2026-03-27 00:53:57.508401 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.508405 | orchestrator | 2026-03-27 00:53:57.508409 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-27 00:53:57.508413 | orchestrator | Friday 27 March 2026 00:50:19 +0000 (0:00:01.189) 0:02:36.838 ********** 2026-03-27 00:53:57.508417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:53:57.508425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:53:57.508437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:53:57.508442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:53:57.508446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:53:57.508451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:53:57.508455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:53:57.508473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:53:57.508481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:53:57.508488 | orchestrator | 2026-03-27 00:53:57.508494 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-27 00:53:57.508500 | orchestrator | Friday 27 March 2026 00:50:22 +0000 (0:00:03.010) 0:02:39.849 ********** 2026-03-27 00:53:57.508506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:53:57.508513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:53:57.508520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:53:57.508531 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.508543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:53:57.508551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:53:57.508557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:53:57.508564 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.508571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:53:57.508578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:53:57.508592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:53:57.508598 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.508604 | orchestrator | 2026-03-27 00:53:57.508611 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-27 00:53:57.508621 | orchestrator | Friday 27 March 2026 00:50:23 +0000 (0:00:00.568) 0:02:40.418 ********** 2026-03-27 00:53:57.508631 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-27 00:53:57.508638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-27 00:53:57.508645 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.508652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-27 00:53:57.508658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-27 00:53:57.508665 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.508672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-27 00:53:57.508678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-27 00:53:57.508685 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.508691 
| orchestrator | 2026-03-27 00:53:57.508698 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-27 00:53:57.508704 | orchestrator | Friday 27 March 2026 00:50:24 +0000 (0:00:00.869) 0:02:41.288 ********** 2026-03-27 00:53:57.508711 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.508717 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.508725 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.508731 | orchestrator | 2026-03-27 00:53:57.508737 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-27 00:53:57.508744 | orchestrator | Friday 27 March 2026 00:50:25 +0000 (0:00:01.323) 0:02:42.611 ********** 2026-03-27 00:53:57.508750 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.508758 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.508762 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.508773 | orchestrator | 2026-03-27 00:53:57.508777 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-27 00:53:57.508781 | orchestrator | Friday 27 March 2026 00:50:27 +0000 (0:00:02.129) 0:02:44.741 ********** 2026-03-27 00:53:57.508785 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.508789 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.508793 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.508797 | orchestrator | 2026-03-27 00:53:57.508801 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-27 00:53:57.508805 | orchestrator | Friday 27 March 2026 00:50:27 +0000 (0:00:00.341) 0:02:45.082 ********** 2026-03-27 00:53:57.508809 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.508813 | orchestrator | 2026-03-27 00:53:57.508817 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-03-27 00:53:57.508821 | orchestrator | Friday 27 March 2026 00:50:29 +0000 (0:00:01.207) 0:02:46.289 ********** 2026-03-27 00:53:57.508826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 00:53:57.509178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 00:53:57.509203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 00:53:57.509223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509226 | orchestrator | 2026-03-27 00:53:57.509230 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-27 00:53:57.509234 | orchestrator | Friday 27 March 2026 00:50:32 +0000 (0:00:03.313) 0:02:49.603 ********** 2026-03-27 00:53:57.509246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-27 00:53:57.509250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509254 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.509262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-27 00:53:57.509266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509270 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.509276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-27 00:53:57.509282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509286 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.509290 | orchestrator | 2026-03-27 00:53:57.509294 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-27 00:53:57.509298 | orchestrator | Friday 27 March 2026 00:50:33 +0000 (0:00:00.653) 0:02:50.256 ********** 2026-03-27 00:53:57.509302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-27 00:53:57.509307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-27 00:53:57.509314 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.509318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-27 00:53:57.509322 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-27 00:53:57.509326 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.509330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-27 00:53:57.509333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-27 00:53:57.509337 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.509341 | orchestrator | 2026-03-27 00:53:57.509345 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-27 00:53:57.509348 | orchestrator | Friday 27 March 2026 00:50:34 +0000 (0:00:00.967) 0:02:51.224 ********** 2026-03-27 00:53:57.509352 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.509356 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.509359 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.509363 | orchestrator | 2026-03-27 00:53:57.509367 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-27 00:53:57.509370 | orchestrator | Friday 27 March 2026 00:50:35 +0000 (0:00:01.509) 0:02:52.734 ********** 2026-03-27 00:53:57.509374 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.509378 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.509381 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.509385 | orchestrator | 2026-03-27 00:53:57.509389 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-27 00:53:57.509393 | 
orchestrator | Friday 27 March 2026 00:50:37 +0000 (0:00:01.910) 0:02:54.645 ********** 2026-03-27 00:53:57.509396 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.509400 | orchestrator | 2026-03-27 00:53:57.509404 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-27 00:53:57.509407 | orchestrator | Friday 27 March 2026 00:50:38 +0000 (0:00:00.926) 0:02:55.571 ********** 2026-03-27 00:53:57.509411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-27 00:53:57.509421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 
00:53:57.509428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 
'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-27 00:53:57.509441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-27 00:53:57.509460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509476 | orchestrator | 2026-03-27 00:53:57.509480 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-27 00:53:57.509483 | orchestrator | Friday 27 March 2026 00:50:41 +0000 (0:00:03.327) 0:02:58.899 ********** 2026-03-27 00:53:57.509490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-27 00:53:57.509499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509510 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.509514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-27 00:53:57.509518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509539 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.509543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-27 
00:53:57.509547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.509559 | orchestrator | skipping: 
[testbed-node-2] 2026-03-27 00:53:57.509562 | orchestrator | 2026-03-27 00:53:57.509566 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-27 00:53:57.509570 | orchestrator | Friday 27 March 2026 00:50:42 +0000 (0:00:00.592) 0:02:59.491 ********** 2026-03-27 00:53:57.509574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-27 00:53:57.509578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-27 00:53:57.509584 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.509588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-27 00:53:57.509594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-27 00:53:57.509598 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.509604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-27 00:53:57.509607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-27 00:53:57.509611 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.509615 | orchestrator | 2026-03-27 00:53:57.509619 | orchestrator | 
TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-27 00:53:57.509622 | orchestrator | Friday 27 March 2026 00:50:43 +0000 (0:00:00.779) 0:03:00.271 ********** 2026-03-27 00:53:57.509626 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.509630 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.509634 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.509637 | orchestrator | 2026-03-27 00:53:57.509641 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-27 00:53:57.509645 | orchestrator | Friday 27 March 2026 00:50:44 +0000 (0:00:01.275) 0:03:01.546 ********** 2026-03-27 00:53:57.509648 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.509652 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.509656 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.509659 | orchestrator | 2026-03-27 00:53:57.509663 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-27 00:53:57.509667 | orchestrator | Friday 27 March 2026 00:50:46 +0000 (0:00:02.053) 0:03:03.600 ********** 2026-03-27 00:53:57.509671 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.509674 | orchestrator | 2026-03-27 00:53:57.509678 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-27 00:53:57.509699 | orchestrator | Friday 27 March 2026 00:50:47 +0000 (0:00:01.150) 0:03:04.751 ********** 2026-03-27 00:53:57.509704 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-27 00:53:57.509707 | orchestrator | 2026-03-27 00:53:57.509711 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-27 00:53:57.509716 | orchestrator | Friday 27 March 2026 00:50:50 +0000 (0:00:03.206) 0:03:07.958 ********** 2026-03-27 00:53:57.509723 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:53:57.510655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-27 00:53:57.510678 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.510689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:53:57.510693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-27 00:53:57.510704 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.510714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:53:57.510719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-27 00:53:57.510723 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.510727 | orchestrator | 2026-03-27 00:53:57.510731 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-27 00:53:57.510735 | orchestrator | Friday 27 March 2026 00:50:53 +0000 (0:00:02.279) 0:03:10.237 ********** 2026-03-27 00:53:57.510757 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:53:57.510764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-27 00:53:57.510768 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.510778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:53:57.510782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-27 00:53:57.510786 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.510790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:53:57.510809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-27 00:53:57.510813 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.510817 | orchestrator | 2026-03-27 00:53:57.510821 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-27 00:53:57.510825 | orchestrator | Friday 27 March 2026 00:50:55 +0000 (0:00:02.605) 0:03:12.843 ********** 2026-03-27 00:53:57.510829 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-27 00:53:57.510834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-27 00:53:57.510838 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.510841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-27 00:53:57.510849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-27 00:53:57.510852 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.510856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-27 00:53:57.510885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-27 00:53:57.510889 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.510893 | orchestrator | 2026-03-27 
00:53:57.510896 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-27 00:53:57.510900 | orchestrator | Friday 27 March 2026 00:50:57 +0000 (0:00:02.244) 0:03:15.087 ********** 2026-03-27 00:53:57.510907 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.510910 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.510914 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.510918 | orchestrator | 2026-03-27 00:53:57.510922 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-27 00:53:57.510926 | orchestrator | Friday 27 March 2026 00:50:59 +0000 (0:00:01.837) 0:03:16.925 ********** 2026-03-27 00:53:57.510929 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.510933 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.510937 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.510941 | orchestrator | 2026-03-27 00:53:57.510944 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-27 00:53:57.510948 | orchestrator | Friday 27 March 2026 00:51:01 +0000 (0:00:01.544) 0:03:18.470 ********** 2026-03-27 00:53:57.510952 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.510956 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.510959 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.510963 | orchestrator | 2026-03-27 00:53:57.510967 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-27 00:53:57.510971 | orchestrator | Friday 27 March 2026 00:51:01 +0000 (0:00:00.275) 0:03:18.745 ********** 2026-03-27 00:53:57.510974 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.510981 | orchestrator | 2026-03-27 00:53:57.510985 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] 
****************** 2026-03-27 00:53:57.510989 | orchestrator | Friday 27 March 2026 00:51:02 +0000 (0:00:01.158) 0:03:19.904 ********** 2026-03-27 00:53:57.510993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-27 00:53:57.510998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-27 00:53:57.511002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-27 00:53:57.511006 | orchestrator | 2026-03-27 00:53:57.511010 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-27 00:53:57.511014 | orchestrator | Friday 27 March 2026 00:51:04 +0000 (0:00:01.446) 0:03:21.350 ********** 2026-03-27 00:53:57.511026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-27 00:53:57.511030 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.511034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-27 00:53:57.511042 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.511046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-27 00:53:57.511050 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.511053 | orchestrator | 2026-03-27 00:53:57.511057 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-27 00:53:57.511061 | orchestrator | Friday 27 March 2026 00:51:04 +0000 (0:00:00.399) 0:03:21.750 ********** 2026-03-27 00:53:57.511065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-27 00:53:57.511069 
| orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.511073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-27 00:53:57.511118 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.511123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-27 00:53:57.511127 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.511131 | orchestrator | 2026-03-27 00:53:57.511134 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-27 00:53:57.511138 | orchestrator | Friday 27 March 2026 00:51:05 +0000 (0:00:00.969) 0:03:22.720 ********** 2026-03-27 00:53:57.511142 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.511145 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.511149 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.511153 | orchestrator | 2026-03-27 00:53:57.511156 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-27 00:53:57.511160 | orchestrator | Friday 27 March 2026 00:51:05 +0000 (0:00:00.352) 0:03:23.073 ********** 2026-03-27 00:53:57.511164 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.511168 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.511171 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.511175 | orchestrator | 2026-03-27 00:53:57.511179 | orchestrator | TASK [include_role : mistral] ************************************************** 
2026-03-27 00:53:57.511182 | orchestrator | Friday 27 March 2026 00:51:07 +0000 (0:00:01.116) 0:03:24.189 ********** 2026-03-27 00:53:57.511186 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.511190 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.511194 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.511249 | orchestrator | 2026-03-27 00:53:57.511253 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-27 00:53:57.511259 | orchestrator | Friday 27 March 2026 00:51:07 +0000 (0:00:00.275) 0:03:24.465 ********** 2026-03-27 00:53:57.511263 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.511267 | orchestrator | 2026-03-27 00:53:57.511271 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-27 00:53:57.511277 | orchestrator | Friday 27 March 2026 00:51:08 +0000 (0:00:01.342) 0:03:25.807 ********** 2026-03-27 00:53:57.511282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 00:53:57.511287 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.511293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.511297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 00:53:57.511308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 00:53:57.511316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.511320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.511325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.511329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-27 00:53:57.512162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-27 00:53:57.512310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-27 00:53:57.512317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.512397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.512414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.512423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512445 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.512487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.512502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.512510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.512516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.512539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.512551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.512558 | orchestrator | 2026-03-27 00:53:57.512565 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-27 00:53:57.512573 | orchestrator | Friday 27 March 2026 00:51:12 +0000 (0:00:04.090) 0:03:29.898 ********** 2026-03-27 00:53:57.512584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 00:53:57.512593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 00:53:57.512600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 
5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.512650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-27 00:53:57.513014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-27 00:53:57.513069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2026-03-27 00:53:57.513127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.513136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 00:53:57.513142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513161 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.513179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-27 00:53:57.513206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.513232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.513250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513258 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.513263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.513267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.513277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.513286 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.513295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-27 00:53:57.513374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.513381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-27 00:53:57.513445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-27 00:53:57.513453 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.513457 | orchestrator | 2026-03-27 00:53:57.513461 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-27 00:53:57.513470 | orchestrator | Friday 27 March 2026 00:51:14 +0000 (0:00:02.177) 0:03:32.075 ********** 2026-03-27 00:53:57.513477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-27 00:53:57.513484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-27 00:53:57.513496 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.513503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-27 00:53:57.513510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-27 00:53:57.513516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-27 00:53:57.513568 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.513579 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-27 00:53:57.513587 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.513593 | orchestrator | 2026-03-27 00:53:57.513600 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-27 00:53:57.513606 | orchestrator | Friday 27 March 2026 00:51:16 +0000 (0:00:01.571) 0:03:33.646 ********** 2026-03-27 00:53:57.513612 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.513618 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.513624 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.513631 | orchestrator | 2026-03-27 00:53:57.513637 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-27 00:53:57.513644 | orchestrator | Friday 27 March 2026 00:51:17 +0000 (0:00:01.402) 0:03:35.048 ********** 2026-03-27 00:53:57.513651 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.513658 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.513665 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.513671 | orchestrator | 2026-03-27 00:53:57.513678 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-27 00:53:57.513685 | orchestrator | Friday 27 March 2026 00:51:20 +0000 (0:00:02.211) 0:03:37.259 ********** 2026-03-27 00:53:57.513691 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.513697 | orchestrator | 2026-03-27 00:53:57.513704 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-27 00:53:57.513710 | orchestrator | Friday 27 March 2026 00:51:21 +0000 (0:00:01.271) 0:03:38.531 ********** 2026-03-27 00:53:57.513718 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.513737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.513754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.513762 | orchestrator | 2026-03-27 00:53:57.513768 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-27 00:53:57.513773 | orchestrator | Friday 27 March 2026 00:51:24 +0000 (0:00:02.917) 0:03:41.448 ********** 2026-03-27 00:53:57.513777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2026-03-27 00:53:57.513781 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.513786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-27 00:53:57.513792 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.514367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}})  2026-03-27 00:53:57.514429 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.514435 | orchestrator | 2026-03-27 00:53:57.514440 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-27 00:53:57.514450 | orchestrator | Friday 27 March 2026 00:51:24 +0000 (0:00:00.448) 0:03:41.896 ********** 2026-03-27 00:53:57.514455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514465 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.514469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514477 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.514481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514488 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.514492 | orchestrator | 2026-03-27 00:53:57.514496 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-27 00:53:57.514500 | orchestrator | Friday 27 March 2026 00:51:25 +0000 (0:00:00.837) 0:03:42.734 ********** 2026-03-27 00:53:57.514503 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.514507 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.514511 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.514514 | orchestrator | 2026-03-27 00:53:57.514518 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-27 00:53:57.514522 | orchestrator | Friday 27 March 2026 00:51:26 +0000 (0:00:01.259) 0:03:43.993 ********** 2026-03-27 00:53:57.514525 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.514529 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.514533 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.514537 | orchestrator | 2026-03-27 00:53:57.514541 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-27 00:53:57.514544 | orchestrator | Friday 27 March 2026 00:51:28 +0000 (0:00:02.180) 0:03:46.174 ********** 2026-03-27 00:53:57.514548 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.514552 | orchestrator | 2026-03-27 00:53:57.514556 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-27 00:53:57.514559 | orchestrator | Friday 27 March 2026 00:51:30 +0000 (0:00:01.220) 0:03:47.394 ********** 2026-03-27 00:53:57.514565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.514586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.514600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.514622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514630 | orchestrator | 2026-03-27 00:53:57.514635 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-27 00:53:57.514639 | orchestrator | Friday 27 March 2026 00:51:34 +0000 (0:00:04.595) 0:03:51.989 ********** 2026-03-27 00:53:57.514643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 00:53:57.514653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514666 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.514670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 00:53:57.514675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514686 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.514690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 00:53:57.514699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.514712 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.514718 | orchestrator | 2026-03-27 00:53:57.514724 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-27 00:53:57.514833 | orchestrator | Friday 27 March 2026 00:51:35 +0000 (0:00:00.660) 0:03:52.650 ********** 2026-03-27 00:53:57.514934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514971 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.514976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-27 00:53:57.514994 | orchestrator | skipping: 
[testbed-node-1] 2026-03-27 00:53:57.514998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-27 00:53:57.515003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-27 00:53:57.515007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-27 00:53:57.515012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-27 00:53:57.515016 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.515020 | orchestrator | 2026-03-27 00:53:57.515024 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-27 00:53:57.515034 | orchestrator | Friday 27 March 2026 00:51:36 +0000 (0:00:00.926) 0:03:53.577 ********** 2026-03-27 00:53:57.515038 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.515042 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.515046 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.515050 | orchestrator | 2026-03-27 00:53:57.515056 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-27 00:53:57.515060 | orchestrator | Friday 27 March 2026 00:51:38 +0000 (0:00:01.918) 0:03:55.496 ********** 2026-03-27 00:53:57.515064 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.515068 | orchestrator | 
changed: [testbed-node-1] 2026-03-27 00:53:57.515072 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.515076 | orchestrator | 2026-03-27 00:53:57.515079 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-27 00:53:57.515083 | orchestrator | Friday 27 March 2026 00:51:40 +0000 (0:00:02.330) 0:03:57.826 ********** 2026-03-27 00:53:57.515087 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.515091 | orchestrator | 2026-03-27 00:53:57.515095 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-27 00:53:57.515099 | orchestrator | Friday 27 March 2026 00:51:41 +0000 (0:00:01.296) 0:03:59.123 ********** 2026-03-27 00:53:57.515103 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-27 00:53:57.515112 | orchestrator | 2026-03-27 00:53:57.515116 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-27 00:53:57.515120 | orchestrator | Friday 27 March 2026 00:51:43 +0000 (0:00:01.596) 0:04:00.719 ********** 2026-03-27 00:53:57.515125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-27 00:53:57.515130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-27 00:53:57.515134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-27 00:53:57.515138 | orchestrator | 2026-03-27 00:53:57.515142 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-27 00:53:57.515147 | orchestrator | Friday 27 March 2026 00:51:47 +0000 (0:00:04.044) 0:04:04.764 ********** 2026-03-27 00:53:57.515151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-27 00:53:57.515155 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.515159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': 
{'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-27 00:53:57.515164 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.515175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-27 00:53:57.515180 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.515184 | orchestrator | 2026-03-27 00:53:57.515188 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-27 00:53:57.515196 | orchestrator | Friday 27 March 2026 00:51:48 +0000 (0:00:01.285) 0:04:06.050 ********** 2026-03-27 00:53:57.515200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-27 00:53:57.515204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-27 00:53:57.515209 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.515213 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-27 00:53:57.515218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-27 00:53:57.515222 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.515226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-27 00:53:57.515230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-27 00:53:57.515234 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.515238 | orchestrator | 2026-03-27 00:53:57.515243 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-27 00:53:57.515247 | orchestrator | Friday 27 March 2026 00:51:50 +0000 (0:00:01.709) 0:04:07.759 ********** 2026-03-27 00:53:57.515251 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.515255 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.515259 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.515263 | orchestrator | 2026-03-27 00:53:57.515267 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-27 00:53:57.515271 | orchestrator | Friday 27 March 2026 00:51:52 +0000 (0:00:02.318) 
0:04:10.078 ********** 2026-03-27 00:53:57.515275 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.515279 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.515283 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.515287 | orchestrator | 2026-03-27 00:53:57.515291 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-27 00:53:57.515295 | orchestrator | Friday 27 March 2026 00:51:56 +0000 (0:00:03.371) 0:04:13.449 ********** 2026-03-27 00:53:57.515299 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-27 00:53:57.515304 | orchestrator | 2026-03-27 00:53:57.515308 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-27 00:53:57.515312 | orchestrator | Friday 27 March 2026 00:51:57 +0000 (0:00:00.876) 0:04:14.326 ********** 2026-03-27 00:53:57.515316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-27 00:53:57.515323 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.515334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-27 00:53:57.515339 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.515343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-27 00:53:57.515348 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.515352 | orchestrator | 2026-03-27 00:53:57.515355 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-27 00:53:57.515360 | orchestrator | Friday 27 March 2026 00:51:58 +0000 (0:00:01.593) 0:04:15.919 ********** 2026-03-27 00:53:57.515364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-27 00:53:57.515368 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.515372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-27 00:53:57.515377 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.515381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-27 00:53:57.515385 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.515389 | orchestrator | 2026-03-27 00:53:57.515574 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-27 00:53:57.515581 | orchestrator | Friday 27 March 2026 00:52:00 +0000 (0:00:01.583) 0:04:17.502 ********** 2026-03-27 00:53:57.515584 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.515588 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.515592 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.515596 | orchestrator | 2026-03-27 00:53:57.515599 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-27 00:53:57.515608 | orchestrator | Friday 27 March 2026 00:52:01 +0000 (0:00:01.343) 0:04:18.846 ********** 2026-03-27 00:53:57.515612 | orchestrator | ok: 
[testbed-node-0] 2026-03-27 00:53:57.515617 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:53:57.515621 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:53:57.515625 | orchestrator | 2026-03-27 00:53:57.515629 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-27 00:53:57.515632 | orchestrator | Friday 27 March 2026 00:52:04 +0000 (0:00:02.504) 0:04:21.351 ********** 2026-03-27 00:53:57.515636 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:53:57.515640 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:53:57.515644 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:53:57.515647 | orchestrator | 2026-03-27 00:53:57.515651 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-27 00:53:57.515655 | orchestrator | Friday 27 March 2026 00:52:07 +0000 (0:00:03.238) 0:04:24.589 ********** 2026-03-27 00:53:57.515659 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-27 00:53:57.515663 | orchestrator | 2026-03-27 00:53:57.515666 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-27 00:53:57.515670 | orchestrator | Friday 27 March 2026 00:52:08 +0000 (0:00:00.842) 0:04:25.432 ********** 2026-03-27 00:53:57.515678 | orchestrator | skipping: [testbed-node-0]2026-03-27 00:53:57 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:53:57.515686 | orchestrator | => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-27 00:53:57.515691 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.515695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-27 00:53:57.515699 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.515702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-27 00:53:57.515706 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.515710 | orchestrator | 2026-03-27 00:53:57.515714 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-27 00:53:57.515717 | orchestrator | Friday 27 March 2026 00:52:09 +0000 (0:00:01.451) 0:04:26.884 ********** 2026-03-27 00:53:57.515721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-27 00:53:57.515729 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.515733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-27 00:53:57.515737 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.515741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-27 00:53:57.515745 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.515748 | orchestrator | 2026-03-27 00:53:57.515752 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-27 00:53:57.515756 | orchestrator | Friday 27 March 2026 00:52:11 +0000 (0:00:01.391) 0:04:28.276 ********** 2026-03-27 
00:53:57.515760 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.515763 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.515767 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.515771 | orchestrator | 2026-03-27 00:53:57.515775 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-27 00:53:57.515781 | orchestrator | Friday 27 March 2026 00:52:12 +0000 (0:00:01.556) 0:04:29.832 ********** 2026-03-27 00:53:57.515785 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:53:57.515789 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:53:57.515792 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:53:57.515796 | orchestrator | 2026-03-27 00:53:57.515800 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-27 00:53:57.515813 | orchestrator | Friday 27 March 2026 00:52:15 +0000 (0:00:02.826) 0:04:32.659 ********** 2026-03-27 00:53:57.515817 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:53:57.515821 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:53:57.515824 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:53:57.515828 | orchestrator | 2026-03-27 00:53:57.515832 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-27 00:53:57.515836 | orchestrator | Friday 27 March 2026 00:52:18 +0000 (0:00:03.200) 0:04:35.860 ********** 2026-03-27 00:53:57.515839 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.515843 | orchestrator | 2026-03-27 00:53:57.515847 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-27 00:53:57.515851 | orchestrator | Friday 27 March 2026 00:52:20 +0000 (0:00:01.354) 0:04:37.214 ********** 2026-03-27 00:53:57.515855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.515956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 00:53:57.515967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-03-27 00:53:57.515977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.516043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.516049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 00:53:57.516109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 00:53:57.516118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 00:53:57.516146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.516152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.516175 | orchestrator | 2026-03-27 00:53:57.516181 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-27 00:53:57.516188 | orchestrator | Friday 27 March 2026 00:52:23 +0000 (0:00:03.761) 0:04:40.975 ********** 2026-03-27 00:53:57.516195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 00:53:57.516202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 00:53:57.516212 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.516229 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.516234 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 00:53:57.516239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 00:53:57.516243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.516267 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.516271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 00:53:57.516276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 00:53:57.516280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 00:53:57.516293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 00:53:57.516297 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.516302 | orchestrator | 2026-03-27 00:53:57.516306 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-27 00:53:57.516312 | orchestrator | Friday 27 March 2026 00:52:24 +0000 (0:00:00.950) 0:04:41.926 ********** 2026-03-27 00:53:57.516321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-27 00:53:57.516326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-27 00:53:57.516330 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.516335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-27 00:53:57.516339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-27 00:53:57.516344 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.516348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-27 00:53:57.516353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-27 00:53:57.516357 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.516362 | orchestrator | 2026-03-27 00:53:57.516366 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-27 00:53:57.516370 | orchestrator | Friday 27 March 2026 00:52:25 +0000 (0:00:00.804) 0:04:42.731 ********** 2026-03-27 00:53:57.516375 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.516379 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.516383 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.516387 | orchestrator | 2026-03-27 00:53:57.516392 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-27 00:53:57.516396 | orchestrator | Friday 27 March 2026 00:52:27 +0000 (0:00:01.598) 0:04:44.330 ********** 2026-03-27 00:53:57.516400 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.516405 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.516409 | orchestrator | changed: [testbed-node-2] 
2026-03-27 00:53:57.516413 | orchestrator | 2026-03-27 00:53:57.516418 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-27 00:53:57.516422 | orchestrator | Friday 27 March 2026 00:52:29 +0000 (0:00:02.381) 0:04:46.711 ********** 2026-03-27 00:53:57.516427 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.516431 | orchestrator | 2026-03-27 00:53:57.516435 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-27 00:53:57.516440 | orchestrator | Friday 27 March 2026 00:52:31 +0000 (0:00:01.780) 0:04:48.492 ********** 2026-03-27 00:53:57.516479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:53:57.516497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:53:57.516966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:53:57.516989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:53:57.516996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:53:57.517008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:53:57.517020 | orchestrator | 2026-03-27 00:53:57.517028 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-27 00:53:57.517032 | orchestrator | Friday 27 March 2026 00:52:36 +0000 (0:00:05.332) 0:04:53.824 ********** 2026-03-27 00:53:57.517036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-27 00:53:57.517040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-27 00:53:57.517045 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.517049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-27 00:53:57.517057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-27 00:53:57.517064 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.517071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-27 00:53:57.517075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-27 00:53:57.517079 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.517083 | orchestrator | 2026-03-27 00:53:57.517087 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-27 00:53:57.517090 | orchestrator | Friday 27 March 2026 00:52:37 +0000 (0:00:00.999) 0:04:54.823 ********** 2026-03-27 00:53:57.517095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-27 00:53:57.517099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-27 00:53:57.517104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-27 00:53:57.517112 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.517116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-27 00:53:57.517120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-27 00:53:57.517124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-27 00:53:57.517128 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.517132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-27 00:53:57.517136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-27 00:53:57.517145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-27 00:53:57.517149 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.517153 | orchestrator | 2026-03-27 00:53:57.517157 | orchestrator | TASK 
[proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-27 00:53:57.517161 | orchestrator | Friday 27 March 2026 00:52:38 +0000 (0:00:01.321) 0:04:56.145 ********** 2026-03-27 00:53:57.517164 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.517168 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.517172 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.517175 | orchestrator | 2026-03-27 00:53:57.517179 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-27 00:53:57.517183 | orchestrator | Friday 27 March 2026 00:52:39 +0000 (0:00:00.483) 0:04:56.628 ********** 2026-03-27 00:53:57.517187 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.517191 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.517194 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.517198 | orchestrator | 2026-03-27 00:53:57.517202 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-27 00:53:57.517205 | orchestrator | Friday 27 March 2026 00:52:40 +0000 (0:00:01.459) 0:04:58.088 ********** 2026-03-27 00:53:57.517209 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:53:57.517213 | orchestrator | 2026-03-27 00:53:57.517217 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-27 00:53:57.517220 | orchestrator | Friday 27 March 2026 00:52:42 +0000 (0:00:01.834) 0:04:59.922 ********** 2026-03-27 00:53:57.517225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-27 00:53:57.517234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 00:53:57.517239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-27 00:53:57.517250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 00:53:57.517257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-27 00:53:57.517261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 
00:53:57.517265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 00:53:57.517280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-27 00:53:57.517287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 00:53:57.517295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 00:53:57.517310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-27 00:53:57.517314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-27 00:53:57.517319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-27 00:53:57.517337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-27 00:53:57.517344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-27 00:53:57.517348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-27 00:53:57.517365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-27 00:53:57.517369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-27 00:53:57.517378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-27 00:53:57.517390 | orchestrator | 2026-03-27 00:53:57.517394 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-27 00:53:57.517398 | orchestrator | Friday 27 March 2026 00:52:47 +0000 (0:00:04.343) 0:05:04.266 ********** 2026-03-27 00:53:57.517545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-27 00:53:57.517554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 00:53:57.517558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517570 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 00:53:57.517574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-27 00:53:57.517582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-27 00:53:57.517587 | orchestrator | 2026-03-27 00:53:57 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:53:57.517593 | orchestrator | 2026-03-27 00:53:57 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:53:57.517598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-27 00:53:57.517610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 00:53:57.517618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-27 00:53:57.517622 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.517627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 00:53:57.517648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-27 00:53:57.517656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-27 00:53:57.517660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-27 00:53:57.517664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 00:53:57.517673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 00:53:57.517699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:53:57.517707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:53:57.517716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-27 00:53:57.517722 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.517727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-27 00:53:57.517734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-27 00:53:57.517748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-27 00:53:57.517764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:53:57.517771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 00:53:57.517776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-27 00:53:57.517782 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.517788 | orchestrator |
2026-03-27 00:53:57.517794 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-03-27 00:53:57.517800 | orchestrator | Friday 27 March 2026 00:52:47 +0000 (0:00:00.826) 0:05:05.092 **********
2026-03-27 00:53:57.517806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-27 00:53:57.517814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-27 00:53:57.517821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-27 00:53:57.517828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-27 00:53:57.517835 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.517841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-27 00:53:57.517847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-27 00:53:57.517853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-27 00:53:57.517945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-27 00:53:57.517953 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.517964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-27 00:53:57.517971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-27 00:53:57.517975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-27 00:53:57.517979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-27 00:53:57.517983 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.517987 | orchestrator |
2026-03-27 00:53:57.517991 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-03-27 00:53:57.517995 | orchestrator | Friday 27 March 2026 00:52:49 +0000 (0:00:01.921) 0:05:07.013 **********
2026-03-27 00:53:57.517998 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518002 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518006 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518010 | orchestrator |
2026-03-27 00:53:57.518045 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-03-27 00:53:57.518049 | orchestrator | Friday 27 March 2026 00:52:50 +0000 (0:00:00.498) 0:05:07.512 **********
2026-03-27 00:53:57.518053 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518057 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518061 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518064 | orchestrator |
2026-03-27 00:53:57.518068 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-03-27 00:53:57.518072 | orchestrator | Friday 27 March 2026 00:52:51 +0000 (0:00:01.515) 0:05:09.027 **********
2026-03-27 00:53:57.518076 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:53:57.518080 | orchestrator |
2026-03-27 00:53:57.518084 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-03-27 00:53:57.518088 | orchestrator | Friday 27 March 2026 00:52:53 +0000 (0:00:01.966) 0:05:10.993 **********
2026-03-27 00:53:57.518092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:53:57.518106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:53:57.518114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:53:57.518119 | orchestrator |
2026-03-27 00:53:57.518122 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-03-27 00:53:57.518126 | orchestrator | Friday 27 March 2026 00:52:56 +0000 (0:00:02.331) 0:05:13.325 **********
2026-03-27 00:53:57.518130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:53:57.518134 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:53:57.518149 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-27 00:53:57.518317 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518322 | orchestrator |
2026-03-27 00:53:57.518326 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-03-27 00:53:57.518334 | orchestrator | Friday 27 March 2026 00:52:56 +0000 (0:00:00.441) 0:05:13.766 **********
2026-03-27 00:53:57.518339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-27 00:53:57.518343 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-27 00:53:57.518352 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-27 00:53:57.518361 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518365 | orchestrator |
2026-03-27 00:53:57.518369 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-03-27 00:53:57.518373 | orchestrator | Friday 27 March 2026 00:52:57 +0000 (0:00:00.650) 0:05:14.417 **********
2026-03-27 00:53:57.518377 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518382 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518386 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518391 | orchestrator |
2026-03-27 00:53:57.518395 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-03-27 00:53:57.518399 | orchestrator | Friday 27 March 2026 00:52:58 +0000 (0:00:00.932) 0:05:15.349 **********
2026-03-27 00:53:57.518403 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518407 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518412 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518416 | orchestrator |
2026-03-27 00:53:57.518420 | orchestrator | TASK [include_role : skyline] **************************************************
2026-03-27 00:53:57.518424 | orchestrator | Friday 27 March 2026 00:52:59 +0000 (0:00:01.376) 0:05:16.726 **********
2026-03-27 00:53:57.518429 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:53:57.518433 | orchestrator |
2026-03-27 00:53:57.518437 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-03-27 00:53:57.518441 | orchestrator | Friday 27 March 2026 00:53:01 +0000 (0:00:01.487) 0:05:18.214 **********
2026-03-27 00:53:57.518451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518491 | orchestrator |
2026-03-27 00:53:57.518496 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-03-27 00:53:57.518500 | orchestrator | Friday 27 March 2026 00:53:08 +0000 (0:00:07.253) 0:05:25.468 **********
2026-03-27 00:53:57.518511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518520 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518542 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-27 00:53:57.518554 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518558 | orchestrator |
2026-03-27 00:53:57.518561 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-03-27 00:53:57.518565 | orchestrator | Friday 27 March 2026 00:53:08 +0000 (0:00:00.659) 0:05:26.127 **********
2026-03-27 00:53:57.518569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518588 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518618 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-27 00:53:57.518651 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518658 | orchestrator |
2026-03-27 00:53:57.518665 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-03-27 00:53:57.518675 | orchestrator | Friday 27 March 2026 00:53:09 +0000 (0:00:00.917) 0:05:27.045 **********
2026-03-27 00:53:57.518681 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:53:57.518685 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:53:57.518688 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:53:57.518692 | orchestrator |
2026-03-27 00:53:57.518696 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-03-27 00:53:57.518700 | orchestrator | Friday 27 March 2026 00:53:11 +0000 (0:00:01.359) 0:05:28.405 **********
2026-03-27 00:53:57.518703 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:53:57.518707 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:53:57.518711 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:53:57.518715 | orchestrator |
2026-03-27 00:53:57.518719 | orchestrator | TASK [include_role : swift] ****************************************************
2026-03-27 00:53:57.518727 | orchestrator | Friday 27 March 2026 00:53:13 +0000 (0:00:02.122) 0:05:30.528 **********
2026-03-27 00:53:57.518731 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518735 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518739 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518742 | orchestrator |
2026-03-27 00:53:57.518746 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-27 00:53:57.518750 | orchestrator | Friday 27 March 2026 00:53:13 +0000 (0:00:00.532) 0:05:31.060 **********
2026-03-27 00:53:57.518755 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518761 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518767 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518774 | orchestrator |
2026-03-27 00:53:57.518779 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-27 00:53:57.518845 | orchestrator | Friday 27 March 2026 00:53:14 +0000 (0:00:00.288) 0:05:31.348 **********
2026-03-27 00:53:57.518883 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518890 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518897 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518903 | orchestrator |
2026-03-27 00:53:57.518909 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-27 00:53:57.518916 | orchestrator | Friday 27 March 2026 00:53:14 +0000 (0:00:00.307) 0:05:31.656 **********
2026-03-27 00:53:57.518922 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518928 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518933 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518939 | orchestrator |
2026-03-27 00:53:57.518946 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-27 00:53:57.518951 | orchestrator | Friday 27 March 2026 00:53:14 +0000 (0:00:00.259) 0:05:31.916 **********
2026-03-27 00:53:57.518957 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518963 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518967 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518971 | orchestrator |
2026-03-27 00:53:57.518975 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-27 00:53:57.518978 | orchestrator | Friday 27 March 2026 00:53:15 +0000 (0:00:00.519) 0:05:32.435 **********
2026-03-27 00:53:57.518982 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.518986 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.518990 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.518994 | orchestrator |
2026-03-27 00:53:57.518997 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-03-27 00:53:57.519003 | orchestrator | Friday 27 March 2026 00:53:15 +0000 (0:00:00.481) 0:05:32.917 **********
2026-03-27 00:53:57.519009 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:53:57.519019 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:53:57.519026 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:53:57.519032 | orchestrator |
2026-03-27 00:53:57.519037 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-03-27 00:53:57.519043 | orchestrator | Friday 27 March 2026 00:53:16 +0000 (0:00:00.697) 0:05:33.614 **********
2026-03-27 00:53:57.519049 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:53:57.519055 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:53:57.519061 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:53:57.519066 | orchestrator |
2026-03-27 00:53:57.519072 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-27 00:53:57.519078 | orchestrator | Friday 27 March 2026 00:53:16 +0000 (0:00:00.554) 0:05:34.168 **********
2026-03-27 00:53:57.519085 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:53:57.519090 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:53:57.519096 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:53:57.519101 | orchestrator |
2026-03-27 00:53:57.519108 | orchestrator
| RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-27 00:53:57.519114 | orchestrator | Friday 27 March 2026 00:53:17 +0000 (0:00:00.919) 0:05:35.088 ********** 2026-03-27 00:53:57.519127 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:53:57.519133 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:53:57.519139 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:53:57.519142 | orchestrator | 2026-03-27 00:53:57.519146 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-27 00:53:57.519150 | orchestrator | Friday 27 March 2026 00:53:18 +0000 (0:00:00.898) 0:05:35.986 ********** 2026-03-27 00:53:57.519154 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:53:57.519158 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:53:57.519162 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:53:57.519165 | orchestrator | 2026-03-27 00:53:57.519169 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-27 00:53:57.519173 | orchestrator | Friday 27 March 2026 00:53:19 +0000 (0:00:00.952) 0:05:36.938 ********** 2026-03-27 00:53:57.519177 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.519181 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.519185 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.519189 | orchestrator | 2026-03-27 00:53:57.519192 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-27 00:53:57.519196 | orchestrator | Friday 27 March 2026 00:53:24 +0000 (0:00:04.782) 0:05:41.721 ********** 2026-03-27 00:53:57.519200 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:53:57.519206 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:53:57.519212 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:53:57.519220 | orchestrator | 2026-03-27 00:53:57.519229 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql 
container] *************** 2026-03-27 00:53:57.519244 | orchestrator | Friday 27 March 2026 00:53:26 +0000 (0:00:01.749) 0:05:43.470 ********** 2026-03-27 00:53:57.519251 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.519257 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.519262 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.519267 | orchestrator | 2026-03-27 00:53:57.519273 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-27 00:53:57.519284 | orchestrator | Friday 27 March 2026 00:53:39 +0000 (0:00:13.328) 0:05:56.799 ********** 2026-03-27 00:53:57.519290 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:53:57.519296 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:53:57.519301 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:53:57.519307 | orchestrator | 2026-03-27 00:53:57.519312 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-27 00:53:57.519317 | orchestrator | Friday 27 March 2026 00:53:40 +0000 (0:00:00.819) 0:05:57.618 ********** 2026-03-27 00:53:57.519323 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:53:57.519328 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:53:57.519334 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:53:57.519339 | orchestrator | 2026-03-27 00:53:57.519345 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-27 00:53:57.519351 | orchestrator | Friday 27 March 2026 00:53:49 +0000 (0:00:09.305) 0:06:06.924 ********** 2026-03-27 00:53:57.519356 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.519362 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.519368 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.519374 | orchestrator | 2026-03-27 00:53:57.519380 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 
2026-03-27 00:53:57.519385 | orchestrator | Friday 27 March 2026 00:53:50 +0000 (0:00:00.837) 0:06:07.761 ********** 2026-03-27 00:53:57.519392 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.519398 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.519405 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.519411 | orchestrator | 2026-03-27 00:53:57.519415 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-27 00:53:57.519418 | orchestrator | Friday 27 March 2026 00:53:50 +0000 (0:00:00.352) 0:06:08.113 ********** 2026-03-27 00:53:57.519422 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.519431 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.519435 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.519439 | orchestrator | 2026-03-27 00:53:57.519442 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-27 00:53:57.519446 | orchestrator | Friday 27 March 2026 00:53:51 +0000 (0:00:00.365) 0:06:08.479 ********** 2026-03-27 00:53:57.519450 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.519454 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.519458 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.519461 | orchestrator | 2026-03-27 00:53:57.519465 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-27 00:53:57.519469 | orchestrator | Friday 27 March 2026 00:53:51 +0000 (0:00:00.357) 0:06:08.836 ********** 2026-03-27 00:53:57.519473 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:53:57.519477 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:53:57.519481 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:53:57.519484 | orchestrator | 2026-03-27 00:53:57.519488 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 
2026-03-27 00:53:57.519492 | orchestrator | Friday 27 March 2026 00:53:52 +0000 (0:00:00.795) 0:06:09.632 **********
2026-03-27 00:53:57.519495 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:53:57.519499 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:53:57.519503 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:53:57.519507 | orchestrator |
2026-03-27 00:53:57.519511 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-27 00:53:57.519514 | orchestrator | Friday 27 March 2026 00:53:52 +0000 (0:00:00.371) 0:06:10.004 **********
2026-03-27 00:53:57.519518 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:53:57.519522 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:53:57.519526 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:53:57.519530 | orchestrator |
2026-03-27 00:53:57.519533 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-27 00:53:57.519537 | orchestrator | Friday 27 March 2026 00:53:53 +0000 (0:00:01.048) 0:06:11.053 **********
2026-03-27 00:53:57.519541 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:53:57.519545 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:53:57.519549 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:53:57.519553 | orchestrator |
2026-03-27 00:53:57.519557 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:53:57.519561 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-27 00:53:57.519566 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-27 00:53:57.519572 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-27 00:53:57.519578 | orchestrator |
2026-03-27 00:53:57.519584 | orchestrator |
2026-03-27 00:53:57.519590 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:53:57.519595 | orchestrator | Friday 27 March 2026 00:53:54 +0000 (0:00:00.877) 0:06:11.931 **********
2026-03-27 00:53:57.519601 | orchestrator | ===============================================================================
2026-03-27 00:53:57.519606 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.33s
2026-03-27 00:53:57.519612 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.31s
2026-03-27 00:53:57.519617 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.25s
2026-03-27 00:53:57.519623 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 6.21s
2026-03-27 00:53:57.519628 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.33s
2026-03-27 00:53:57.519640 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.19s
2026-03-27 00:53:57.519652 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.78s
2026-03-27 00:53:57.519658 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.60s
2026-03-27 00:53:57.519668 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.41s
2026-03-27 00:53:57.519674 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.34s
2026-03-27 00:53:57.519680 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.33s
2026-03-27 00:53:57.519686 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.26s
2026-03-27 00:53:57.519692 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.16s
2026-03-27 00:53:57.519697 | orchestrator | haproxy-config :
Copying over grafana haproxy config -------------------- 4.09s 2026-03-27 00:53:57.519703 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.09s 2026-03-27 00:53:57.519709 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.04s 2026-03-27 00:53:57.519715 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.76s 2026-03-27 00:53:57.519721 | orchestrator | loadbalancer : Check loadbalancer containers ---------------------------- 3.66s 2026-03-27 00:53:57.519727 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.65s 2026-03-27 00:53:57.519733 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.63s 2026-03-27 00:54:00.561930 | orchestrator | 2026-03-27 00:54:00 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:00.563014 | orchestrator | 2026-03-27 00:54:00 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:00.563935 | orchestrator | 2026-03-27 00:54:00 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:00.564275 | orchestrator | 2026-03-27 00:54:00 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:03.612676 | orchestrator | 2026-03-27 00:54:03 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:03.613170 | orchestrator | 2026-03-27 00:54:03 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:03.613455 | orchestrator | 2026-03-27 00:54:03 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:03.613620 | orchestrator | 2026-03-27 00:54:03 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:06.639381 | orchestrator | 2026-03-27 00:54:06 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 
00:54:06.639842 | orchestrator | 2026-03-27 00:54:06 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:06.640663 | orchestrator | 2026-03-27 00:54:06 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:06.640686 | orchestrator | 2026-03-27 00:54:06 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:09.673423 | orchestrator | 2026-03-27 00:54:09 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:09.673724 | orchestrator | 2026-03-27 00:54:09 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:09.676727 | orchestrator | 2026-03-27 00:54:09 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:09.676793 | orchestrator | 2026-03-27 00:54:09 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:12.747523 | orchestrator | 2026-03-27 00:54:12 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:12.750514 | orchestrator | 2026-03-27 00:54:12 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:12.752487 | orchestrator | 2026-03-27 00:54:12 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:12.752621 | orchestrator | 2026-03-27 00:54:12 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:15.790718 | orchestrator | 2026-03-27 00:54:15 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:15.790790 | orchestrator | 2026-03-27 00:54:15 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:15.791584 | orchestrator | 2026-03-27 00:54:15 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:15.791614 | orchestrator | 2026-03-27 00:54:15 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:18.824770 | orchestrator | 2026-03-27 00:54:18 | 
INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:18.828710 | orchestrator | 2026-03-27 00:54:18 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:18.829399 | orchestrator | 2026-03-27 00:54:18 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:18.829444 | orchestrator | 2026-03-27 00:54:18 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:21.864169 | orchestrator | 2026-03-27 00:54:21 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:21.864597 | orchestrator | 2026-03-27 00:54:21 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:21.865744 | orchestrator | 2026-03-27 00:54:21 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:21.865795 | orchestrator | 2026-03-27 00:54:21 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:24.892647 | orchestrator | 2026-03-27 00:54:24 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:24.893464 | orchestrator | 2026-03-27 00:54:24 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:24.894229 | orchestrator | 2026-03-27 00:54:24 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:24.894414 | orchestrator | 2026-03-27 00:54:24 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:27.926777 | orchestrator | 2026-03-27 00:54:27 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:27.927677 | orchestrator | 2026-03-27 00:54:27 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:27.928614 | orchestrator | 2026-03-27 00:54:27 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:27.929182 | orchestrator | 2026-03-27 00:54:27 | INFO  | Wait 1 second(s) until 
the next check 2026-03-27 00:54:30.965841 | orchestrator | 2026-03-27 00:54:30 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:30.966226 | orchestrator | 2026-03-27 00:54:30 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:30.967414 | orchestrator | 2026-03-27 00:54:30 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:30.967442 | orchestrator | 2026-03-27 00:54:30 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:33.996584 | orchestrator | 2026-03-27 00:54:33 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:33.998694 | orchestrator | 2026-03-27 00:54:33 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:34.000033 | orchestrator | 2026-03-27 00:54:33 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:34.000126 | orchestrator | 2026-03-27 00:54:34 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:37.065988 | orchestrator | 2026-03-27 00:54:37 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:37.068020 | orchestrator | 2026-03-27 00:54:37 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:37.070009 | orchestrator | 2026-03-27 00:54:37 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:37.070068 | orchestrator | 2026-03-27 00:54:37 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:40.106625 | orchestrator | 2026-03-27 00:54:40 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:40.108704 | orchestrator | 2026-03-27 00:54:40 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:40.110619 | orchestrator | 2026-03-27 00:54:40 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 
00:54:40.110663 | orchestrator | 2026-03-27 00:54:40 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:43.156866 | orchestrator | 2026-03-27 00:54:43 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:43.158915 | orchestrator | 2026-03-27 00:54:43 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:43.160600 | orchestrator | 2026-03-27 00:54:43 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:43.160950 | orchestrator | 2026-03-27 00:54:43 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:46.212096 | orchestrator | 2026-03-27 00:54:46 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:46.213090 | orchestrator | 2026-03-27 00:54:46 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:46.214292 | orchestrator | 2026-03-27 00:54:46 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:46.215428 | orchestrator | 2026-03-27 00:54:46 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:49.264788 | orchestrator | 2026-03-27 00:54:49 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:49.266277 | orchestrator | 2026-03-27 00:54:49 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:49.267928 | orchestrator | 2026-03-27 00:54:49 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:49.268300 | orchestrator | 2026-03-27 00:54:49 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:52.319187 | orchestrator | 2026-03-27 00:54:52 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:52.323158 | orchestrator | 2026-03-27 00:54:52 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:52.324756 | orchestrator | 2026-03-27 00:54:52 | 
INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:52.325219 | orchestrator | 2026-03-27 00:54:52 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:55.362160 | orchestrator | 2026-03-27 00:54:55 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:55.364405 | orchestrator | 2026-03-27 00:54:55 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:55.366421 | orchestrator | 2026-03-27 00:54:55 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:55.366474 | orchestrator | 2026-03-27 00:54:55 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:54:58.416386 | orchestrator | 2026-03-27 00:54:58 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:54:58.418237 | orchestrator | 2026-03-27 00:54:58 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:54:58.421171 | orchestrator | 2026-03-27 00:54:58 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:54:58.421274 | orchestrator | 2026-03-27 00:54:58 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:55:01.461988 | orchestrator | 2026-03-27 00:55:01 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:55:01.462367 | orchestrator | 2026-03-27 00:55:01 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:55:01.464589 | orchestrator | 2026-03-27 00:55:01 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:55:01.464632 | orchestrator | 2026-03-27 00:55:01 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:55:04.516457 | orchestrator | 2026-03-27 00:55:04 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:55:04.517801 | orchestrator | 2026-03-27 00:55:04 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in 
state STARTED 2026-03-27 00:55:04.519303 | orchestrator | 2026-03-27 00:55:04 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:55:04.519586 | orchestrator | 2026-03-27 00:55:04 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:55:07.567442 | orchestrator | 2026-03-27 00:55:07 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:55:07.567497 | orchestrator | 2026-03-27 00:55:07 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:55:07.568555 | orchestrator | 2026-03-27 00:55:07 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:55:07.568653 | orchestrator | 2026-03-27 00:55:07 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:55:10.600512 | orchestrator | 2026-03-27 00:55:10 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:55:10.603725 | orchestrator | 2026-03-27 00:55:10 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:55:10.603774 | orchestrator | 2026-03-27 00:55:10 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:55:10.603780 | orchestrator | 2026-03-27 00:55:10 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:55:13.645493 | orchestrator | 2026-03-27 00:55:13 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:55:13.647809 | orchestrator | 2026-03-27 00:55:13 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:55:13.649602 | orchestrator | 2026-03-27 00:55:13 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:55:13.649945 | orchestrator | 2026-03-27 00:55:13 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:55:16.723390 | orchestrator | 2026-03-27 00:55:16 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:55:16.726506 | orchestrator 
| 2026-03-27 00:55:16 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:55:16.729470 | orchestrator | 2026-03-27 00:55:16 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:55:16.729729 | orchestrator | 2026-03-27 00:55:16 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:55:19.792705 | orchestrator | 2026-03-27 00:55:19 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:55:19.792779 | orchestrator | 2026-03-27 00:55:19 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:55:19.793657 | orchestrator | 2026-03-27 00:55:19 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:55:19.793683 | orchestrator | 2026-03-27 00:55:19 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:55:22.845432 | orchestrator | 2026-03-27 00:55:22 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:55:22.846817 | orchestrator | 2026-03-27 00:55:22 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:55:22.849031 | orchestrator | 2026-03-27 00:55:22 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:55:22.849115 | orchestrator | 2026-03-27 00:55:22 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:55:25.878506 | orchestrator | 2026-03-27 00:55:25 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:55:25.878976 | orchestrator | 2026-03-27 00:55:25 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:55:25.880486 | orchestrator | 2026-03-27 00:55:25 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED 2026-03-27 00:55:25.880518 | orchestrator | 2026-03-27 00:55:25 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:55:28.908804 | orchestrator | 2026-03-27 00:55:28 | INFO  | Task 
fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:28.910362 | orchestrator | 2026-03-27 00:55:28 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:28.913029 | orchestrator | 2026-03-27 00:55:28 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:28.913327 | orchestrator | 2026-03-27 00:55:28 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:55:32.017184 | orchestrator | 2026-03-27 00:55:32 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:32.018790 | orchestrator | 2026-03-27 00:55:32 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:32.022176 | orchestrator | 2026-03-27 00:55:32 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:32.022231 | orchestrator | 2026-03-27 00:55:32 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:55:35.078269 | orchestrator | 2026-03-27 00:55:35 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:35.081440 | orchestrator | 2026-03-27 00:55:35 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:35.085367 | orchestrator | 2026-03-27 00:55:35 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:35.085448 | orchestrator | 2026-03-27 00:55:35 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:55:38.140035 | orchestrator | 2026-03-27 00:55:38 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:38.142066 | orchestrator | 2026-03-27 00:55:38 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:38.144805 | orchestrator | 2026-03-27 00:55:38 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:38.144869 | orchestrator | 2026-03-27 00:55:38 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:55:41.195334 | orchestrator | 2026-03-27 00:55:41 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:41.197047 | orchestrator | 2026-03-27 00:55:41 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:41.202135 | orchestrator | 2026-03-27 00:55:41 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:41.202255 | orchestrator | 2026-03-27 00:55:41 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:55:44.257432 | orchestrator | 2026-03-27 00:55:44 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:44.258938 | orchestrator | 2026-03-27 00:55:44 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:44.260967 | orchestrator | 2026-03-27 00:55:44 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:44.261179 | orchestrator | 2026-03-27 00:55:44 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:55:47.324003 | orchestrator | 2026-03-27 00:55:47 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:47.326500 | orchestrator | 2026-03-27 00:55:47 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:47.328656 | orchestrator | 2026-03-27 00:55:47 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:47.328726 | orchestrator | 2026-03-27 00:55:47 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:55:50.372133 | orchestrator | 2026-03-27 00:55:50 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:50.373449 | orchestrator | 2026-03-27 00:55:50 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:50.374602 | orchestrator | 2026-03-27 00:55:50 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:50.374979 | orchestrator | 2026-03-27 00:55:50 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:55:53.431153 | orchestrator | 2026-03-27 00:55:53 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:53.436952 | orchestrator | 2026-03-27 00:55:53 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:53.440136 | orchestrator | 2026-03-27 00:55:53 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:53.440189 | orchestrator | 2026-03-27 00:55:53 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:55:56.485803 | orchestrator | 2026-03-27 00:55:56 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:56.488730 | orchestrator | 2026-03-27 00:55:56 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:56.490938 | orchestrator | 2026-03-27 00:55:56 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:56.490986 | orchestrator | 2026-03-27 00:55:56 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:55:59.544467 | orchestrator | 2026-03-27 00:55:59 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:55:59.547924 | orchestrator | 2026-03-27 00:55:59 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:55:59.550180 | orchestrator | 2026-03-27 00:55:59 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state STARTED
2026-03-27 00:55:59.550478 | orchestrator | 2026-03-27 00:55:59 | INFO  | Wait 1 second(s) until the next check
2026-03-27 00:56:02.593287 | orchestrator | 2026-03-27 00:56:02 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED
2026-03-27 00:56:02.594590 | orchestrator | 2026-03-27 00:56:02 | INFO  | Task 5dbdfa0e-d27e-4e45-abaf-a0c0b66e2b13 is in state STARTED
2026-03-27 00:56:02.596458 | orchestrator | 2026-03-27 00:56:02 |
INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED
2026-03-27 00:56:02.602576 | orchestrator | 2026-03-27 00:56:02 | INFO  | Task 0098ce22-482c-4da1-ab99-521ed6f104ab is in state SUCCESS
2026-03-27 00:56:02.604374 | orchestrator |
2026-03-27 00:56:02.604435 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-27 00:56:02.604445 | orchestrator | 2.16.14
2026-03-27 00:56:02.604453 | orchestrator |
2026-03-27 00:56:02.604460 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-27 00:56:02.604468 | orchestrator |
2026-03-27 00:56:02.604475 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-27 00:56:02.604482 | orchestrator | Friday 27 March 2026 00:45:31 +0000 (0:00:00.867) 0:00:00.867 **********
2026-03-27 00:56:02.604515 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:56:02.604525 | orchestrator |
2026-03-27 00:56:02.604532 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-27 00:56:02.604557 | orchestrator | Friday 27 March 2026 00:45:33 +0000 (0:00:01.412) 0:00:02.279 **********
2026-03-27 00:56:02.604565 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.604573 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.604587 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.604595 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:02.604601 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:02.604606 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:02.604610 | orchestrator |
2026-03-27 00:56:02.604640 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-27 00:56:02.604646 | orchestrator | Friday 27 March 2026 00:45:35 +0000 (0:00:02.172) 0:00:04.452 **********
2026-03-27 00:56:02.604651 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.604655 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.604660 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.604664 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:02.604669 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:02.604673 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:02.604678 | orchestrator |
2026-03-27 00:56:02.604682 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-27 00:56:02.604687 | orchestrator | Friday 27 March 2026 00:45:36 +0000 (0:00:00.676) 0:00:05.129 **********
2026-03-27 00:56:02.604692 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.604696 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.604701 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.604705 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:02.604709 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:02.604714 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:02.604718 | orchestrator |
2026-03-27 00:56:02.604723 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-27 00:56:02.604727 | orchestrator | Friday 27 March 2026 00:45:37 +0000 (0:00:01.243) 0:00:06.372 **********
2026-03-27 00:56:02.604732 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.604737 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.604741 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.604745 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:02.604750 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:02.604754 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:02.604759 | orchestrator |
2026-03-27 00:56:02.604763 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-27 00:56:02.604768 | orchestrator | Friday 27 March 2026 00:45:38 +0000 (0:00:01.027) 0:00:07.400 **********
2026-03-27 00:56:02.604812 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.604818 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.604822 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.604838 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:02.604843 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:02.604847 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:02.604852 | orchestrator |
2026-03-27 00:56:02.604857 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-27 00:56:02.604861 | orchestrator | Friday 27 March 2026 00:45:39 +0000 (0:00:00.991) 0:00:08.391 **********
2026-03-27 00:56:02.604866 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.604870 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.604875 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.604897 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:02.604903 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:02.604907 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:02.604911 | orchestrator |
2026-03-27 00:56:02.604916 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-27 00:56:02.604921 | orchestrator | Friday 27 March 2026 00:45:40 +0000 (0:00:01.326) 0:00:09.718 **********
2026-03-27 00:56:02.604925 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.604930 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.604947 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.604952 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.604957 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.604961 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.604966 | orchestrator |
2026-03-27 00:56:02.604971 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-27 00:56:02.604975 | orchestrator | Friday 27 March 2026 00:45:41 +0000 (0:00:00.895) 0:00:10.614 **********
2026-03-27 00:56:02.604981 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.604986 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.604991 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:02.604996 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.605002 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:02.605007 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:02.605012 | orchestrator |
2026-03-27 00:56:02.605018 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-27 00:56:02.605023 | orchestrator | Friday 27 March 2026 00:45:43 +0000 (0:00:01.720) 0:00:12.334 **********
2026-03-27 00:56:02.605028 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-27 00:56:02.605034 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-27 00:56:02.605039 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-27 00:56:02.605044 | orchestrator |
2026-03-27 00:56:02.605050 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-27 00:56:02.605056 | orchestrator | Friday 27 March 2026 00:45:44 +0000 (0:00:01.018) 0:00:13.352 **********
2026-03-27 00:56:02.605061 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.605067 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.605071 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.605085 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:02.605090 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:02.605094 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:02.605099 | orchestrator |
2026-03-27 00:56:02.605103 | orchestrator | TASK [ceph-facts : Find a running mon container]
******************************* 2026-03-27 00:56:02.605108 | orchestrator | Friday 27 March 2026 00:45:46 +0000 (0:00:02.283) 0:00:15.635 ********** 2026-03-27 00:56:02.605113 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-27 00:56:02.605117 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-27 00:56:02.605122 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-27 00:56:02.605126 | orchestrator | 2026-03-27 00:56:02.605130 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-27 00:56:02.605135 | orchestrator | Friday 27 March 2026 00:45:49 +0000 (0:00:03.130) 0:00:18.766 ********** 2026-03-27 00:56:02.605144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-27 00:56:02.605149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-27 00:56:02.605153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-27 00:56:02.605161 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.605166 | orchestrator | 2026-03-27 00:56:02.605170 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-27 00:56:02.605207 | orchestrator | Friday 27 March 2026 00:45:50 +0000 (0:00:00.658) 0:00:19.425 ********** 2026-03-27 00:56:02.605213 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.605219 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-03-27 00:56:02.605224 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.605229 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.605233 | orchestrator | 2026-03-27 00:56:02.605238 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-27 00:56:02.605242 | orchestrator | Friday 27 March 2026 00:45:52 +0000 (0:00:01.811) 0:00:21.236 ********** 2026-03-27 00:56:02.605248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.605253 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.605258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-03-27 00:56:02.605263 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.605268 | orchestrator | 2026-03-27 00:56:02.605272 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-27 00:56:02.605291 | orchestrator | Friday 27 March 2026 00:45:52 +0000 (0:00:00.351) 0:00:21.588 ********** 2026-03-27 00:56:02.605302 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-27 00:45:47.691027', 'end': '2026-03-27 00:45:47.802262', 'delta': '0:00:00.111235', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.605318 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-27 00:45:48.365885', 'end': '2026-03-27 00:45:48.460783', 'delta': '0:00:00.094898', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.605326 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-27 00:45:49.499991', 'end': '2026-03-27 00:45:49.590165', 'delta': '0:00:00.090174', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.605331 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.605336 | orchestrator | 2026-03-27 00:56:02.605344 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-27 00:56:02.605380 | orchestrator | Friday 27 March 2026 00:45:54 +0000 (0:00:01.517) 0:00:23.106 ********** 2026-03-27 00:56:02.605392 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.605430 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.605437 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.605441 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.605446 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.605451 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.605461 | orchestrator | 2026-03-27 00:56:02.605472 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-27 00:56:02.605478 | orchestrator | Friday 27 March 2026 00:45:56 +0000 (0:00:02.404) 0:00:25.511 ********** 2026-03-27 00:56:02.605485 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-27 00:56:02.605491 | orchestrator | 2026-03-27 00:56:02.605499 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 
2026-03-27 00:56:02.605554 | orchestrator | Friday 27 March 2026 00:45:57 +0000 (0:00:00.686) 0:00:26.198 **********
2026-03-27 00:56:02.605565 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.605573 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.605580 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.605588 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.605596 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.605603 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.605611 | orchestrator |
2026-03-27 00:56:02.605645 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-27 00:56:02.605671 | orchestrator | Friday 27 March 2026 00:45:58 +0000 (0:00:01.075) 0:00:27.273 **********
2026-03-27 00:56:02.605677 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.605682 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.605686 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.605691 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.605696 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.605700 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.605706 | orchestrator |
2026-03-27 00:56:02.605716 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-27 00:56:02.605758 | orchestrator | Friday 27 March 2026 00:45:59 +0000 (0:00:01.233) 0:00:28.506 **********
2026-03-27 00:56:02.605768 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.605797 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.605807 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.605815 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.605823 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.605831 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.605839 | orchestrator |
2026-03-27 00:56:02.605848 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-27 00:56:02.605856 | orchestrator | Friday 27 March 2026 00:46:00 +0000 (0:00:00.957) 0:00:29.464 **********
2026-03-27 00:56:02.605865 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.605873 | orchestrator |
2026-03-27 00:56:02.605901 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-27 00:56:02.605910 | orchestrator | Friday 27 March 2026 00:46:00 +0000 (0:00:00.247) 0:00:29.711 **********
2026-03-27 00:56:02.605951 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.605959 | orchestrator |
2026-03-27 00:56:02.605967 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-27 00:56:02.605975 | orchestrator | Friday 27 March 2026 00:46:00 +0000 (0:00:00.222) 0:00:29.934 **********
2026-03-27 00:56:02.605983 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.605990 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.605998 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.606775 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.606793 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.606802 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.606811 | orchestrator |
2026-03-27 00:56:02.607251 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-27 00:56:02.607263 | orchestrator | Friday 27 March 2026 00:46:01 +0000 (0:00:00.997) 0:00:30.931 **********
2026-03-27 00:56:02.607271 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.607280 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.607288 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.607296 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.607304 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.607312 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.607320 | orchestrator |
2026-03-27 00:56:02.607329 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-27 00:56:02.607368 | orchestrator | Friday 27 March 2026 00:46:03 +0000 (0:00:01.360) 0:00:32.292 **********
2026-03-27 00:56:02.607375 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.607382 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.607390 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.607398 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.607406 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.607414 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.607469 | orchestrator |
2026-03-27 00:56:02.607483 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-27 00:56:02.607492 | orchestrator | Friday 27 March 2026 00:46:04 +0000 (0:00:00.731) 0:00:33.025 **********
2026-03-27 00:56:02.607500 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.607508 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.607517 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.607525 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.607533 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.607542 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.607550 | orchestrator |
2026-03-27 00:56:02.607785 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-27 00:56:02.607796 | orchestrator | Friday 27 March 2026 00:46:05 +0000 (0:00:01.453) 0:00:34.478 **********
2026-03-27 00:56:02.607804 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.607827 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.607836 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.607843 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.607851 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.607858 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.607866 | orchestrator |
2026-03-27 00:56:02.607874 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-27 00:56:02.607967 | orchestrator | Friday 27 March 2026 00:46:06 +0000 (0:00:01.053) 0:00:35.531 **********
2026-03-27 00:56:02.607977 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.607985 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.607992 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.608000 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.608008 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.608016 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.608023 | orchestrator |
2026-03-27 00:56:02.608031 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-27 00:56:02.608040 | orchestrator | Friday 27 March 2026 00:46:08 +0000 (0:00:01.972) 0:00:37.503 **********
2026-03-27 00:56:02.608047 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.608055 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.608063 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.608070 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.608077 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.608085 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.608092 | orchestrator |
2026-03-27 00:56:02.608100 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-27 00:56:02.608108 | orchestrator | Friday 27 March 2026 00:46:09 +0000 (0:00:00.616) 0:00:38.120 **********
2026-03-27 00:56:02.608118 |
orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--49c52ee7--6668--5cd2--bd86--f7267953750e-osd--block--49c52ee7--6668--5cd2--bd86--f7267953750e', 'dm-uuid-LVM-aIeYERUPfSMgKvMUlrUvkdFoiC095wYqmQHJrrTn0jpmHxteM5p3holeBEU1wK52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2cf1a901--b2f7--5490--8423--90f944953f5f-osd--block--2cf1a901--b2f7--5490--8423--90f944953f5f', 'dm-uuid-LVM-oG5nRXfwiEfIyT67me8tDDkp9qe9PZl6uJWjGDHETnsMXx2yJFE6R8tp2wcCvLG6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-27 00:56:02.608501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part1', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part14', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part15', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part16', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.608602 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--49c52ee7--6668--5cd2--bd86--f7267953750e-osd--block--49c52ee7--6668--5cd2--bd86--f7267953750e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c27DoD-5Xms-HWce-cCFK-RGwJ-OB5L-Wp0aUE', 'scsi-0QEMU_QEMU_HARDDISK_62ab2900-9bbe-4288-89a4-62dba7ae92ab', 'scsi-SQEMU_QEMU_HARDDISK_62ab2900-9bbe-4288-89a4-62dba7ae92ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.608611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f-osd--block--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f', 'dm-uuid-LVM-kXpxmk7mM7gsT0IEG34nSngbkTZpbdXRxkZWqd06KQroWKMJAdY7IUK7KXlT0a4X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2cf1a901--b2f7--5490--8423--90f944953f5f-osd--block--2cf1a901--b2f7--5490--8423--90f944953f5f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AdpT4M-V1ru-ryF1-yUmX-ps46-3mDd-YCPCY0', 'scsi-0QEMU_QEMU_HARDDISK_0ff86b74-b83b-4d7e-b564-01c0b90f308d', 'scsi-SQEMU_QEMU_HARDDISK_0ff86b74-b83b-4d7e-b564-01c0b90f308d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.608627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--627e7bc4--4e7d--5af1--903b--8d115676372d-osd--block--627e7bc4--4e7d--5af1--903b--8d115676372d', 'dm-uuid-LVM-tAGTKeLAL1CuimTCxNRF6S7vcoFbSB1IG207gSsYVP7XHnbeEilqW2dICrCpUzDt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.608668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52ce1f02-342d-40b1-ab4b-d26aefe85f26', 'scsi-SQEMU_QEMU_HARDDISK_52ce1f02-342d-40b1-ab4b-d26aefe85f26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bb6fbf97--7198--5485--83ee--7be3b389ad62-osd--block--bb6fbf97--7198--5485--83ee--7be3b389ad62', 'dm-uuid-LVM-CjqIlvHeAtR3JbQk0BgFBJxu6DMSkyeQ6Z2BmWlBw0epF9HWYfyR2g1Gee0Y0aRK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331-osd--block--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331', 'dm-uuid-LVM-J1Nq2ec7Gmy9QQADR5bdDjMg13S83C0ff6IWfn1j1PGxmlgMcc6TFvgvCYtuSrhX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609136 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.609208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609280 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part1', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part14', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part15', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part16', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609371 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bb6fbf97--7198--5485--83ee--7be3b389ad62-osd--block--bb6fbf97--7198--5485--83ee--7be3b389ad62'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0CT0cf-3Djh-G5bQ-hgkl-4qDa-J3jY-vD1h3S', 'scsi-0QEMU_QEMU_HARDDISK_3878b4cc-7fe4-4758-b0af-fcf7391d431c', 'scsi-SQEMU_QEMU_HARDDISK_3878b4cc-7fe4-4758-b0af-fcf7391d431c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part1', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part14', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part15', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part16', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609465 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331-osd--block--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iTweHW-LY1Y-g2UM-sheT-2IyK-2y3c-bkBoq2', 'scsi-0QEMU_QEMU_HARDDISK_53da1fd0-572d-430c-b2ac-506bde32f617', 'scsi-SQEMU_QEMU_HARDDISK_53da1fd0-572d-430c-b2ac-506bde32f617'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f-osd--block--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7mpUVm-WSwP-nQK5-a7bw-t1xe-hN5n-Diz1dd', 'scsi-0QEMU_QEMU_HARDDISK_86c6402f-d184-4443-979d-ecd201841231', 'scsi-SQEMU_QEMU_HARDDISK_86c6402f-d184-4443-979d-ecd201841231'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3917e6ab-68a3-44be-970a-31d9d2a57984', 'scsi-SQEMU_QEMU_HARDDISK_3917e6ab-68a3-44be-970a-31d9d2a57984'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--627e7bc4--4e7d--5af1--903b--8d115676372d-osd--block--627e7bc4--4e7d--5af1--903b--8d115676372d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yJHRP5-hv0d-FXuF-M4Vj-N3MC-oEik-gGt0x7', 'scsi-0QEMU_QEMU_HARDDISK_131bb9e5-0133-49dd-b67b-125236a47022', 'scsi-SQEMU_QEMU_HARDDISK_131bb9e5-0133-49dd-b67b-125236a47022'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac', 'scsi-SQEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part1', 'scsi-SQEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part14', 'scsi-SQEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part15', 'scsi-SQEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part16', 'scsi-SQEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609664 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2796c507-44e5-4ccf-b3e2-014e00eaf9ef', 'scsi-SQEMU_QEMU_HARDDISK_2796c507-44e5-4ccf-b3e2-014e00eaf9ef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a', 'scsi-SQEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part1', 'scsi-SQEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part14', 'scsi-SQEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part15', 'scsi-SQEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part16', 'scsi-SQEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:56:02.609892 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-27 00:56:02.609900 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.609906 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.609913 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.609919 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.609937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-27 00:56:02.609949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-27 00:56:02.609956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions':
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.609963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.610044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.610055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.610065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.610071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:56:02.610078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d', 'scsi-SQEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part1', 'scsi-SQEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part14', 'scsi-SQEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part15', 'scsi-SQEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 
'5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part16', 'scsi-SQEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-27 00:56:02.610145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-27 00:56:02.610155 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.610161 | orchestrator |
2026-03-27 00:56:02.610168 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-27 00:56:02.610174 | orchestrator | Friday 27 March 2026 00:46:10 +0000 (0:00:01.569) 0:00:39.690 **********
2026-03-27 00:56:02.610185 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--49c52ee7--6668--5cd2--bd86--f7267953750e-osd--block--49c52ee7--6668--5cd2--bd86--f7267953750e', 'dm-uuid-LVM-aIeYERUPfSMgKvMUlrUvkdFoiC095wYqmQHJrrTn0jpmHxteM5p3holeBEU1wK52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2cf1a901--b2f7--5490--8423--90f944953f5f-osd--block--2cf1a901--b2f7--5490--8423--90f944953f5f', 'dm-uuid-LVM-oG5nRXfwiEfIyT67me8tDDkp9qe9PZl6uJWjGDHETnsMXx2yJFE6R8tp2wcCvLG6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610199 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-27 00:56:02.610210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610282 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610309 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part1', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part14', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part15', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part16', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--49c52ee7--6668--5cd2--bd86--f7267953750e-osd--block--49c52ee7--6668--5cd2--bd86--f7267953750e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c27DoD-5Xms-HWce-cCFK-RGwJ-OB5L-Wp0aUE', 'scsi-0QEMU_QEMU_HARDDISK_62ab2900-9bbe-4288-89a4-62dba7ae92ab', 'scsi-SQEMU_QEMU_HARDDISK_62ab2900-9bbe-4288-89a4-62dba7ae92ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2cf1a901--b2f7--5490--8423--90f944953f5f-osd--block--2cf1a901--b2f7--5490--8423--90f944953f5f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AdpT4M-V1ru-ryF1-yUmX-ps46-3mDd-YCPCY0', 'scsi-0QEMU_QEMU_HARDDISK_0ff86b74-b83b-4d7e-b564-01c0b90f308d', 'scsi-SQEMU_QEMU_HARDDISK_0ff86b74-b83b-4d7e-b564-01c0b90f308d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52ce1f02-342d-40b1-ab4b-d26aefe85f26', 'scsi-SQEMU_QEMU_HARDDISK_52ce1f02-342d-40b1-ab4b-d26aefe85f26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610502 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f-osd--block--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f', 'dm-uuid-LVM-kXpxmk7mM7gsT0IEG34nSngbkTZpbdXRxkZWqd06KQroWKMJAdY7IUK7KXlT0a4X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610522 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--627e7bc4--4e7d--5af1--903b--8d115676372d-osd--block--627e7bc4--4e7d--5af1--903b--8d115676372d', 'dm-uuid-LVM-tAGTKeLAL1CuimTCxNRF6S7vcoFbSB1IG207gSsYVP7XHnbeEilqW2dICrCpUzDt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610533 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610540 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-27 00:56:02.610546 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.610552 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-27 00:56:02.610598 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-27 00:56:02.610611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {},
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610616 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bb6fbf97--7198--5485--83ee--7be3b389ad62-osd--block--bb6fbf97--7198--5485--83ee--7be3b389ad62', 'dm-uuid-LVM-CjqIlvHeAtR3JbQk0BgFBJxu6DMSkyeQ6Z2BmWlBw0epF9HWYfyR2g1Gee0Y0aRK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610633 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610639 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331-osd--block--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331', 'dm-uuid-LVM-J1Nq2ec7Gmy9QQADR5bdDjMg13S83C0ff6IWfn1j1PGxmlgMcc6TFvgvCYtuSrhX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610684 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610700 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part1', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part14', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part15', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part15'], 'labels': 
['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part16', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610771 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f-osd--block--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7mpUVm-WSwP-nQK5-a7bw-t1xe-hN5n-Diz1dd', 'scsi-0QEMU_QEMU_HARDDISK_86c6402f-d184-4443-979d-ecd201841231', 'scsi-SQEMU_QEMU_HARDDISK_86c6402f-d184-4443-979d-ecd201841231'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610791 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610798 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610816 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610823 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610893 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610905 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610917 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610929 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610937 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.610989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part1', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part14', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part15', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part16', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-27 00:56:02.611004 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611016 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611023 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611029 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bb6fbf97--7198--5485--83ee--7be3b389ad62-osd--block--bb6fbf97--7198--5485--83ee--7be3b389ad62'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0CT0cf-3Djh-G5bQ-hgkl-4qDa-J3jY-vD1h3S', 'scsi-0QEMU_QEMU_HARDDISK_3878b4cc-7fe4-4758-b0af-fcf7391d431c', 'scsi-SQEMU_QEMU_HARDDISK_3878b4cc-7fe4-4758-b0af-fcf7391d431c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611088 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611103 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331-osd--block--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331'], 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iTweHW-LY1Y-g2UM-sheT-2IyK-2y3c-bkBoq2', 'scsi-0QEMU_QEMU_HARDDISK_53da1fd0-572d-430c-b2ac-506bde32f617', 'scsi-SQEMU_QEMU_HARDDISK_53da1fd0-572d-430c-b2ac-506bde32f617'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611115 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac', 'scsi-SQEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part1', 'scsi-SQEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part14', 'scsi-SQEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part15', 'scsi-SQEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part16', 'scsi-SQEMU_QEMU_HARDDISK_40e5dcb4-672b-4763-9a98-56119e00a3ac-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611162 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611175 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3917e6ab-68a3-44be-970a-31d9d2a57984', 'scsi-SQEMU_QEMU_HARDDISK_3917e6ab-68a3-44be-970a-31d9d2a57984'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611187 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--627e7bc4--4e7d--5af1--903b--8d115676372d-osd--block--627e7bc4--4e7d--5af1--903b--8d115676372d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yJHRP5-hv0d-FXuF-M4Vj-N3MC-oEik-gGt0x7', 'scsi-0QEMU_QEMU_HARDDISK_131bb9e5-0133-49dd-b67b-125236a47022', 'scsi-SQEMU_QEMU_HARDDISK_131bb9e5-0133-49dd-b67b-125236a47022'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611194 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2796c507-44e5-4ccf-b3e2-014e00eaf9ef', 'scsi-SQEMU_QEMU_HARDDISK_2796c507-44e5-4ccf-b3e2-014e00eaf9ef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611200 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.611207 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611262 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611273 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611285 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611292 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611298 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611306 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611311 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611425 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611441 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611449 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.611453 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.611458 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a', 'scsi-SQEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part1', 'scsi-SQEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part14', 'scsi-SQEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part15', 'scsi-SQEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part16', 'scsi-SQEMU_QEMU_HARDDISK_a468aa16-2d5a-4768-ab18-db6a6ccef41a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
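Every skipped item above reports the same `false_condition`: either `osd_auto_discovery | default(False) | bool` or `inventory_hostname in groups.get(osd_group_name, [])`. The following is a minimal, hypothetical sketch (the helper `should_run` is not part of ceph-ansible) of how the first of those Jinja2 conditions evaluates in plain Python, which is why hosts that never define `osd_auto_discovery` skip every device item:

```python
def should_run(host_vars: dict) -> bool:
    """Mirror the Jinja2 chain `osd_auto_discovery | default(False) | bool`."""
    # `default(False)` supplies False when the variable is undefined.
    value = host_vars.get("osd_auto_discovery", False)
    if isinstance(value, str):
        # Ansible's `bool` filter treats these strings as truthy.
        return value.strip().lower() in ("yes", "on", "1", "true")
    return bool(value)

# testbed-node-4/5 define no osd_auto_discovery, so the task is skipped:
print(should_run({}))                                # False -> skip
print(should_run({"osd_auto_discovery": True}))      # True  -> run
print(should_run({"osd_auto_discovery": "yes"}))     # True  -> run
```

Because the condition is false for all hosts here, ansible prints one "skipping" record per device in `ansible_devices` (loop0..loop7, sda, sdb, ...), producing the verbose output above.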
2026-03-27 00:56:02.611495 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611501 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.611505 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611514 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611518 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611522 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611526 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611531 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611560 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611571 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611576 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d', 'scsi-SQEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part1', 'scsi-SQEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part14', 'scsi-SQEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part15', 'scsi-SQEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part16', 'scsi-SQEMU_QEMU_HARDDISK_2f18c829-b3cd-4f22-b402-72c3edab461d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611580 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-27 00:56:02.611584 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.611588 | orchestrator | 2026-03-27 00:56:02.611621 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-27 00:56:02.611627 | orchestrator | Friday 27 March 2026 00:46:13 +0000 (0:00:02.615) 0:00:42.305 ********** 2026-03-27 00:56:02.611632 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.611636 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.611640 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.611650 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.611654 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.611658 | orchestrator | ok: [testbed-node-2] 2026-03-27 
00:56:02.611662 | orchestrator | 2026-03-27 00:56:02.611666 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-27 00:56:02.611670 | orchestrator | Friday 27 March 2026 00:46:15 +0000 (0:00:02.224) 0:00:44.529 ********** 2026-03-27 00:56:02.611673 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.611677 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.611681 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.611684 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.611688 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.611692 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.611695 | orchestrator | 2026-03-27 00:56:02.611699 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-27 00:56:02.611703 | orchestrator | Friday 27 March 2026 00:46:16 +0000 (0:00:00.745) 0:00:45.274 ********** 2026-03-27 00:56:02.611707 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.611713 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.611717 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.611720 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.611724 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.611728 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.611732 | orchestrator | 2026-03-27 00:56:02.611735 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-27 00:56:02.611739 | orchestrator | Friday 27 March 2026 00:46:17 +0000 (0:00:01.387) 0:00:46.661 ********** 2026-03-27 00:56:02.611743 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.611747 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.611751 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.611755 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.611758 | orchestrator | 
skipping: [testbed-node-1] 2026-03-27 00:56:02.611762 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.611766 | orchestrator | 2026-03-27 00:56:02.611769 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-27 00:56:02.611773 | orchestrator | Friday 27 March 2026 00:46:18 +0000 (0:00:01.065) 0:00:47.727 ********** 2026-03-27 00:56:02.611777 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.611781 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.611784 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.611788 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.611792 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.611796 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.611799 | orchestrator | 2026-03-27 00:56:02.611803 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-27 00:56:02.611807 | orchestrator | Friday 27 March 2026 00:46:19 +0000 (0:00:00.817) 0:00:48.545 ********** 2026-03-27 00:56:02.611811 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.611815 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.611818 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.611822 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.611825 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.611829 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.611833 | orchestrator | 2026-03-27 00:56:02.611837 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-27 00:56:02.611840 | orchestrator | Friday 27 March 2026 00:46:20 +0000 (0:00:00.664) 0:00:49.209 ********** 2026-03-27 00:56:02.611844 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-27 00:56:02.611851 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-27 
00:56:02.611855 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-27 00:56:02.611859 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-27 00:56:02.611862 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-27 00:56:02.611866 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-27 00:56:02.611875 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-27 00:56:02.611909 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-27 00:56:02.611914 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-27 00:56:02.611917 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-27 00:56:02.611921 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-27 00:56:02.611925 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-27 00:56:02.611928 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-27 00:56:02.611932 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-27 00:56:02.611936 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-27 00:56:02.611940 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-27 00:56:02.611943 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-27 00:56:02.611947 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-27 00:56:02.611951 | orchestrator | 2026-03-27 00:56:02.611955 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-27 00:56:02.611959 | orchestrator | Friday 27 March 2026 00:46:23 +0000 (0:00:03.680) 0:00:52.889 ********** 2026-03-27 00:56:02.611962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-27 00:56:02.611966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-27 00:56:02.611970 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-2)  2026-03-27 00:56:02.611974 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.611978 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-27 00:56:02.611981 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-27 00:56:02.611985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-27 00:56:02.611989 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.611992 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-27 00:56:02.612013 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-27 00:56:02.612017 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-27 00:56:02.612021 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.612025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-27 00:56:02.612029 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-27 00:56:02.612033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-27 00:56:02.612036 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-27 00:56:02.612040 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-27 00:56:02.612044 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-27 00:56:02.612047 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612051 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.612055 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-27 00:56:02.612059 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-27 00:56:02.612062 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-27 00:56:02.612066 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.612070 | orchestrator | 2026-03-27 00:56:02.612076 | orchestrator | TASK 
[ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-27 00:56:02.612080 | orchestrator | Friday 27 March 2026 00:46:24 +0000 (0:00:01.023) 0:00:53.913 ********** 2026-03-27 00:56:02.612087 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612091 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.612094 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.612098 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.612102 | orchestrator | 2026-03-27 00:56:02.612106 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-27 00:56:02.612111 | orchestrator | Friday 27 March 2026 00:46:26 +0000 (0:00:01.628) 0:00:55.541 ********** 2026-03-27 00:56:02.612114 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612118 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.612122 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.612126 | orchestrator | 2026-03-27 00:56:02.612129 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-27 00:56:02.612133 | orchestrator | Friday 27 March 2026 00:46:26 +0000 (0:00:00.428) 0:00:55.970 ********** 2026-03-27 00:56:02.612137 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612141 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.612144 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.612148 | orchestrator | 2026-03-27 00:56:02.612152 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-27 00:56:02.612155 | orchestrator | Friday 27 March 2026 00:46:27 +0000 (0:00:00.474) 0:00:56.444 ********** 2026-03-27 00:56:02.612159 | orchestrator | skipping: [testbed-node-3] 2026-03-27 
00:56:02.612163 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.612167 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.612171 | orchestrator | 2026-03-27 00:56:02.612174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-27 00:56:02.612178 | orchestrator | Friday 27 March 2026 00:46:27 +0000 (0:00:00.326) 0:00:56.771 ********** 2026-03-27 00:56:02.612182 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.612186 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.612189 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.612193 | orchestrator | 2026-03-27 00:56:02.612197 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-27 00:56:02.612201 | orchestrator | Friday 27 March 2026 00:46:28 +0000 (0:00:00.976) 0:00:57.748 ********** 2026-03-27 00:56:02.612204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:56:02.612208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-27 00:56:02.612212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:56:02.612216 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612219 | orchestrator | 2026-03-27 00:56:02.612223 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-27 00:56:02.612227 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:00.346) 0:00:58.094 ********** 2026-03-27 00:56:02.612231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:56:02.612235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-27 00:56:02.612238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:56:02.612242 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612246 | orchestrator | 2026-03-27 00:56:02.612249 | orchestrator | 
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-27 00:56:02.612253 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:00.342) 0:00:58.437 ********** 2026-03-27 00:56:02.612257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:56:02.612261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-27 00:56:02.612265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:56:02.612270 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612274 | orchestrator | 2026-03-27 00:56:02.612278 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-27 00:56:02.612285 | orchestrator | Friday 27 March 2026 00:46:29 +0000 (0:00:00.393) 0:00:58.830 ********** 2026-03-27 00:56:02.612290 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.612294 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.612299 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.612303 | orchestrator | 2026-03-27 00:56:02.612307 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-27 00:56:02.612312 | orchestrator | Friday 27 March 2026 00:46:30 +0000 (0:00:00.315) 0:00:59.146 ********** 2026-03-27 00:56:02.612316 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-27 00:56:02.612320 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-27 00:56:02.612337 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-27 00:56:02.612342 | orchestrator | 2026-03-27 00:56:02.612347 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-27 00:56:02.612352 | orchestrator | Friday 27 March 2026 00:46:30 +0000 (0:00:00.796) 0:00:59.942 ********** 2026-03-27 00:56:02.612356 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-27 00:56:02.612361 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-27 00:56:02.612365 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-27 00:56:02.612370 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-27 00:56:02.612374 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-27 00:56:02.612378 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-27 00:56:02.612383 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-27 00:56:02.612387 | orchestrator | 2026-03-27 00:56:02.612393 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-27 00:56:02.612398 | orchestrator | Friday 27 March 2026 00:46:32 +0000 (0:00:01.078) 0:01:01.021 ********** 2026-03-27 00:56:02.612402 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-27 00:56:02.612407 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-27 00:56:02.612411 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-27 00:56:02.612415 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-27 00:56:02.612420 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-27 00:56:02.612424 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-27 00:56:02.612429 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-27 00:56:02.612433 | orchestrator | 2026-03-27 00:56:02.612437 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-27 00:56:02.612442 | 
orchestrator | Friday 27 March 2026 00:46:34 +0000 (0:00:02.173) 0:01:03.194 ********** 2026-03-27 00:56:02.612446 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.612451 | orchestrator | 2026-03-27 00:56:02.612456 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-27 00:56:02.612460 | orchestrator | Friday 27 March 2026 00:46:35 +0000 (0:00:01.285) 0:01:04.479 ********** 2026-03-27 00:56:02.612464 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.612469 | orchestrator | 2026-03-27 00:56:02.612474 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-27 00:56:02.612478 | orchestrator | Friday 27 March 2026 00:46:36 +0000 (0:00:01.255) 0:01:05.735 ********** 2026-03-27 00:56:02.612482 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612491 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.612495 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.612500 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.612504 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.612508 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.612513 | orchestrator | 2026-03-27 00:56:02.612517 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-27 00:56:02.612521 | orchestrator | Friday 27 March 2026 00:46:38 +0000 (0:00:01.483) 0:01:07.219 ********** 2026-03-27 00:56:02.612525 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612530 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.612534 | orchestrator | ok: [testbed-node-4] 2026-03-27 
00:56:02.612538 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.612543 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.612547 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.612551 | orchestrator | 2026-03-27 00:56:02.612556 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-27 00:56:02.612560 | orchestrator | Friday 27 March 2026 00:46:39 +0000 (0:00:00.851) 0:01:08.070 ********** 2026-03-27 00:56:02.612564 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612569 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.612573 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.612577 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.612581 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.612586 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.612590 | orchestrator | 2026-03-27 00:56:02.612594 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-27 00:56:02.612599 | orchestrator | Friday 27 March 2026 00:46:40 +0000 (0:00:00.962) 0:01:09.033 ********** 2026-03-27 00:56:02.612604 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.612608 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612612 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.612616 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.612621 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.612625 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.612630 | orchestrator | 2026-03-27 00:56:02.612634 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-27 00:56:02.612638 | orchestrator | Friday 27 March 2026 00:46:40 +0000 (0:00:00.758) 0:01:09.792 ********** 2026-03-27 00:56:02.612643 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612647 | orchestrator | skipping: 
[testbed-node-4] 2026-03-27 00:56:02.612651 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.612656 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.612660 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.612679 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.612683 | orchestrator | 2026-03-27 00:56:02.612687 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-27 00:56:02.612691 | orchestrator | Friday 27 March 2026 00:46:41 +0000 (0:00:00.912) 0:01:10.704 ********** 2026-03-27 00:56:02.612694 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612698 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.612702 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.612706 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612710 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.612714 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.612717 | orchestrator | 2026-03-27 00:56:02.612721 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-27 00:56:02.612725 | orchestrator | Friday 27 March 2026 00:46:42 +0000 (0:00:00.791) 0:01:11.496 ********** 2026-03-27 00:56:02.612729 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612732 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.612736 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.612740 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612743 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.612750 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.612753 | orchestrator | 2026-03-27 00:56:02.612760 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-27 00:56:02.612763 | orchestrator | Friday 27 March 2026 00:46:43 +0000 (0:00:00.577) 0:01:12.074 ********** 
2026-03-27 00:56:02.612767 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.612771 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.612775 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.612778 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.612782 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.612786 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.612790 | orchestrator | 2026-03-27 00:56:02.612793 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-27 00:56:02.612797 | orchestrator | Friday 27 March 2026 00:46:44 +0000 (0:00:01.585) 0:01:13.660 ********** 2026-03-27 00:56:02.612801 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.612804 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.612808 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.612812 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.612815 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.612819 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.612823 | orchestrator | 2026-03-27 00:56:02.612826 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-27 00:56:02.612830 | orchestrator | Friday 27 March 2026 00:46:45 +0000 (0:00:01.260) 0:01:14.920 ********** 2026-03-27 00:56:02.612834 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612838 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.612841 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.612845 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612849 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.612853 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.612856 | orchestrator | 2026-03-27 00:56:02.612860 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-27 00:56:02.612864 | orchestrator | Friday 
27 March 2026 00:46:46 +0000 (0:00:00.689) 0:01:15.609 ********** 2026-03-27 00:56:02.612867 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.612871 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.612875 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.612887 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.612891 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.612894 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.612898 | orchestrator | 2026-03-27 00:56:02.612902 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-27 00:56:02.612906 | orchestrator | Friday 27 March 2026 00:46:47 +0000 (0:00:00.603) 0:01:16.213 ********** 2026-03-27 00:56:02.612910 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.612914 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.612917 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.612921 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612925 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.612929 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.612932 | orchestrator | 2026-03-27 00:56:02.612936 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-27 00:56:02.612940 | orchestrator | Friday 27 March 2026 00:46:48 +0000 (0:00:01.248) 0:01:17.461 ********** 2026-03-27 00:56:02.612944 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.612947 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.612951 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.612955 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612959 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.612963 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.612967 | orchestrator | 2026-03-27 00:56:02.612971 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] 
****************************** 2026-03-27 00:56:02.612975 | orchestrator | Friday 27 March 2026 00:46:49 +0000 (0:00:00.870) 0:01:18.332 ********** 2026-03-27 00:56:02.612982 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.612986 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.612989 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.612993 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.612997 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613000 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613004 | orchestrator | 2026-03-27 00:56:02.613008 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-27 00:56:02.613012 | orchestrator | Friday 27 March 2026 00:46:50 +0000 (0:00:00.866) 0:01:19.199 ********** 2026-03-27 00:56:02.613015 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613019 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613023 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613026 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613030 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613034 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613038 | orchestrator | 2026-03-27 00:56:02.613041 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-27 00:56:02.613045 | orchestrator | Friday 27 March 2026 00:46:50 +0000 (0:00:00.530) 0:01:19.729 ********** 2026-03-27 00:56:02.613049 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613052 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613056 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613060 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613077 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613081 | orchestrator | skipping: [testbed-node-2] 2026-03-27 
00:56:02.613085 | orchestrator | 2026-03-27 00:56:02.613089 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-27 00:56:02.613092 | orchestrator | Friday 27 March 2026 00:46:51 +0000 (0:00:00.827) 0:01:20.556 ********** 2026-03-27 00:56:02.613096 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613100 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613104 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613108 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.613112 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.613116 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.613119 | orchestrator | 2026-03-27 00:56:02.613123 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-27 00:56:02.613127 | orchestrator | Friday 27 March 2026 00:46:52 +0000 (0:00:00.706) 0:01:21.262 ********** 2026-03-27 00:56:02.613131 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.613134 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.613138 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.613142 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.613145 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.613149 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.613153 | orchestrator | 2026-03-27 00:56:02.613159 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-27 00:56:02.613163 | orchestrator | Friday 27 March 2026 00:46:53 +0000 (0:00:00.910) 0:01:22.173 ********** 2026-03-27 00:56:02.613166 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.613170 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.613174 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.613178 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.613181 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.613185 | 
orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.613189 | orchestrator | 2026-03-27 00:56:02.613192 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-27 00:56:02.613196 | orchestrator | Friday 27 March 2026 00:46:54 +0000 (0:00:01.424) 0:01:23.597 ********** 2026-03-27 00:56:02.613200 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.613204 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.613207 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.613211 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.613217 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.613221 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.613225 | orchestrator | 2026-03-27 00:56:02.613228 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-27 00:56:02.613232 | orchestrator | Friday 27 March 2026 00:46:56 +0000 (0:00:02.191) 0:01:25.789 ********** 2026-03-27 00:56:02.613236 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.613240 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.613243 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.613247 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.613251 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.613254 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.613258 | orchestrator | 2026-03-27 00:56:02.613262 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-27 00:56:02.613266 | orchestrator | Friday 27 March 2026 00:47:00 +0000 (0:00:03.494) 0:01:29.284 ********** 2026-03-27 00:56:02.613269 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.613273 | orchestrator | 
2026-03-27 00:56:02.613277 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-27 00:56:02.613281 | orchestrator | Friday 27 March 2026 00:47:01 +0000 (0:00:01.110) 0:01:30.395 ********** 2026-03-27 00:56:02.613284 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613288 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613292 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613295 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613299 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613303 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613307 | orchestrator | 2026-03-27 00:56:02.613310 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-27 00:56:02.613314 | orchestrator | Friday 27 March 2026 00:47:01 +0000 (0:00:00.573) 0:01:30.968 ********** 2026-03-27 00:56:02.613318 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613321 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613325 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613329 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613332 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613336 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613340 | orchestrator | 2026-03-27 00:56:02.613343 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-27 00:56:02.613347 | orchestrator | Friday 27 March 2026 00:47:02 +0000 (0:00:00.860) 0:01:31.829 ********** 2026-03-27 00:56:02.613351 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-27 00:56:02.613355 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-27 00:56:02.613358 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-27 00:56:02.613362 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-27 00:56:02.613366 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-27 00:56:02.613369 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-27 00:56:02.613373 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-27 00:56:02.613377 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-27 00:56:02.613381 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-27 00:56:02.613384 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-27 00:56:02.613400 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-27 00:56:02.613405 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-27 00:56:02.613411 | orchestrator | 2026-03-27 00:56:02.613415 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-27 00:56:02.613419 | orchestrator | Friday 27 March 2026 00:47:04 +0000 (0:00:01.199) 0:01:33.029 ********** 2026-03-27 00:56:02.613422 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.613426 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.613430 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.613434 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.613437 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.613441 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.613445 | orchestrator | 2026-03-27 00:56:02.613449 | orchestrator | TASK [ceph-container-common : Restore certificates 
selinux context] ************ 2026-03-27 00:56:02.613452 | orchestrator | Friday 27 March 2026 00:47:05 +0000 (0:00:01.190) 0:01:34.220 ********** 2026-03-27 00:56:02.613456 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613460 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613464 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613469 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613473 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613476 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613480 | orchestrator | 2026-03-27 00:56:02.613484 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-27 00:56:02.613488 | orchestrator | Friday 27 March 2026 00:47:05 +0000 (0:00:00.474) 0:01:34.695 ********** 2026-03-27 00:56:02.613491 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613495 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613499 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613502 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613506 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613510 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613514 | orchestrator | 2026-03-27 00:56:02.613518 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-27 00:56:02.613521 | orchestrator | Friday 27 March 2026 00:47:06 +0000 (0:00:00.648) 0:01:35.343 ********** 2026-03-27 00:56:02.613525 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613529 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613532 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613536 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613540 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613543 | orchestrator | skipping: [testbed-node-2] 
2026-03-27 00:56:02.613547 | orchestrator | 2026-03-27 00:56:02.613551 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-27 00:56:02.613554 | orchestrator | Friday 27 March 2026 00:47:06 +0000 (0:00:00.471) 0:01:35.815 ********** 2026-03-27 00:56:02.613558 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.613562 | orchestrator | 2026-03-27 00:56:02.613566 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-27 00:56:02.613570 | orchestrator | Friday 27 March 2026 00:47:07 +0000 (0:00:01.085) 0:01:36.900 ********** 2026-03-27 00:56:02.613573 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.613577 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.613581 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.613585 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.613588 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.613592 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.613596 | orchestrator | 2026-03-27 00:56:02.613600 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-27 00:56:02.613603 | orchestrator | Friday 27 March 2026 00:48:15 +0000 (0:01:07.809) 0:02:44.709 ********** 2026-03-27 00:56:02.613607 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-27 00:56:02.613613 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-27 00:56:02.613617 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-27 00:56:02.613620 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-27 00:56:02.613624 | orchestrator | skipping: 
[testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-27 00:56:02.613628 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-27 00:56:02.613631 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613635 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-27 00:56:02.613639 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-27 00:56:02.613643 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-27 00:56:02.613646 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613650 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-27 00:56:02.613654 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-27 00:56:02.613658 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-27 00:56:02.613661 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613665 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-27 00:56:02.613669 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-27 00:56:02.613672 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-27 00:56:02.613676 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613680 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613696 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-27 00:56:02.613700 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-27 00:56:02.613704 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-27 00:56:02.613708 | orchestrator | skipping: [testbed-node-2] 2026-03-27 
00:56:02.613711 | orchestrator | 2026-03-27 00:56:02.613715 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-27 00:56:02.613719 | orchestrator | Friday 27 March 2026 00:48:16 +0000 (0:00:00.728) 0:02:45.438 ********** 2026-03-27 00:56:02.613723 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613727 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613730 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613734 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613738 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613741 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613745 | orchestrator | 2026-03-27 00:56:02.613749 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-27 00:56:02.613752 | orchestrator | Friday 27 March 2026 00:48:17 +0000 (0:00:00.832) 0:02:46.270 ********** 2026-03-27 00:56:02.613756 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613760 | orchestrator | 2026-03-27 00:56:02.613766 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-27 00:56:02.613770 | orchestrator | Friday 27 March 2026 00:48:17 +0000 (0:00:00.134) 0:02:46.405 ********** 2026-03-27 00:56:02.613774 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613778 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613782 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613785 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613789 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613793 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613796 | orchestrator | 2026-03-27 00:56:02.613800 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-27 00:56:02.613807 | orchestrator | Friday 27 March 
2026 00:48:18 +0000 (0:00:00.655) 0:02:47.061 ********** 2026-03-27 00:56:02.613810 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613814 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613818 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613821 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613825 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613829 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613832 | orchestrator | 2026-03-27 00:56:02.613836 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-27 00:56:02.613840 | orchestrator | Friday 27 March 2026 00:48:19 +0000 (0:00:00.925) 0:02:47.986 ********** 2026-03-27 00:56:02.613844 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613847 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613851 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613855 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613858 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613862 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613866 | orchestrator | 2026-03-27 00:56:02.613870 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-27 00:56:02.613873 | orchestrator | Friday 27 March 2026 00:48:19 +0000 (0:00:00.605) 0:02:48.592 ********** 2026-03-27 00:56:02.613877 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.613888 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.613891 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.613895 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.613899 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.613903 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.613907 | orchestrator | 2026-03-27 00:56:02.613910 | orchestrator | TASK [ceph-container-common : Set_fact 
ceph_version ceph_version.stdout.split] *** 2026-03-27 00:56:02.613914 | orchestrator | Friday 27 March 2026 00:48:21 +0000 (0:00:02.315) 0:02:50.907 ********** 2026-03-27 00:56:02.613918 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.613922 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.613925 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.613929 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.613933 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.613937 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.613940 | orchestrator | 2026-03-27 00:56:02.613944 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-27 00:56:02.613948 | orchestrator | Friday 27 March 2026 00:48:22 +0000 (0:00:00.633) 0:02:51.540 ********** 2026-03-27 00:56:02.613952 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.613956 | orchestrator | 2026-03-27 00:56:02.613960 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-27 00:56:02.613964 | orchestrator | Friday 27 March 2026 00:48:23 +0000 (0:00:01.294) 0:02:52.835 ********** 2026-03-27 00:56:02.613967 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.613971 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.613975 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.613979 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.613983 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.613986 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.613990 | orchestrator | 2026-03-27 00:56:02.613994 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-27 00:56:02.613998 | orchestrator | Friday 27 March 2026 00:48:24 
+0000 (0:00:00.643) 0:02:53.479 ********** 2026-03-27 00:56:02.614002 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.614005 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.614009 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.614033 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.614037 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.614044 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.614047 | orchestrator | 2026-03-27 00:56:02.614051 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-27 00:56:02.614055 | orchestrator | Friday 27 March 2026 00:48:25 +0000 (0:00:00.842) 0:02:54.321 ********** 2026-03-27 00:56:02.614059 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.614062 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.614081 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.614085 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.614089 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.614092 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.614096 | orchestrator | 2026-03-27 00:56:02.614100 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-27 00:56:02.614103 | orchestrator | Friday 27 March 2026 00:48:26 +0000 (0:00:00.679) 0:02:55.001 ********** 2026-03-27 00:56:02.614107 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.614111 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.614115 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.614118 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.614122 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.614126 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.614129 | orchestrator | 2026-03-27 00:56:02.614133 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-27 00:56:02.614137 | orchestrator | Friday 27 March 2026 00:48:26 +0000 (0:00:00.861) 0:02:55.863 ********** 2026-03-27 00:56:02.614141 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.614144 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.614148 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.614152 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.614155 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.614161 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.614165 | orchestrator | 2026-03-27 00:56:02.614169 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-27 00:56:02.614173 | orchestrator | Friday 27 March 2026 00:48:27 +0000 (0:00:00.661) 0:02:56.524 ********** 2026-03-27 00:56:02.614176 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.614180 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.614184 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.614188 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.614191 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.614195 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.614199 | orchestrator | 2026-03-27 00:56:02.614202 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-27 00:56:02.614206 | orchestrator | Friday 27 March 2026 00:48:28 +0000 (0:00:00.690) 0:02:57.215 ********** 2026-03-27 00:56:02.614210 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.614214 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.614217 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.614221 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.614225 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.614228 
| orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.614232 | orchestrator | 2026-03-27 00:56:02.614236 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-27 00:56:02.614239 | orchestrator | Friday 27 March 2026 00:48:28 +0000 (0:00:00.623) 0:02:57.839 ********** 2026-03-27 00:56:02.614243 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.614247 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.614251 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.614254 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.614258 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.614262 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.614265 | orchestrator | 2026-03-27 00:56:02.614269 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-27 00:56:02.614276 | orchestrator | Friday 27 March 2026 00:48:29 +0000 (0:00:00.876) 0:02:58.716 ********** 2026-03-27 00:56:02.614280 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.614283 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.614287 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.614291 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.614295 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.614298 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.614302 | orchestrator | 2026-03-27 00:56:02.614306 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-27 00:56:02.614310 | orchestrator | Friday 27 March 2026 00:48:30 +0000 (0:00:01.214) 0:02:59.931 ********** 2026-03-27 00:56:02.614313 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.614317 | orchestrator | 2026-03-27 
00:56:02.614321 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-27 00:56:02.614325 | orchestrator | Friday 27 March 2026 00:48:32 +0000 (0:00:01.207) 0:03:01.138 ********** 2026-03-27 00:56:02.614328 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-27 00:56:02.614332 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-27 00:56:02.614336 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-27 00:56:02.614340 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-27 00:56:02.614343 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-27 00:56:02.614347 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-27 00:56:02.614351 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-27 00:56:02.614355 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-27 00:56:02.614358 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-27 00:56:02.614362 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-27 00:56:02.614366 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-27 00:56:02.614370 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-27 00:56:02.614373 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-27 00:56:02.614377 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-27 00:56:02.614381 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-27 00:56:02.614384 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-27 00:56:02.614388 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-27 00:56:02.614392 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-27 00:56:02.614408 | orchestrator | changed: [testbed-node-3] 
=> (item=/var/lib/ceph/osd) 2026-03-27 00:56:02.614413 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-27 00:56:02.614416 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-27 00:56:02.614420 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-27 00:56:02.614424 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-27 00:56:02.614427 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-27 00:56:02.614431 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-27 00:56:02.614435 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-27 00:56:02.614438 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-27 00:56:02.614442 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-27 00:56:02.614446 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-27 00:56:02.614449 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-27 00:56:02.614453 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-27 00:56:02.614457 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-27 00:56:02.614467 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-27 00:56:02.614471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-27 00:56:02.614474 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-27 00:56:02.614478 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-27 00:56:02.614482 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-27 00:56:02.614486 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-27 00:56:02.614489 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 
2026-03-27 00:56:02.614493 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-27 00:56:02.614497 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-27 00:56:02.614501 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-27 00:56:02.614504 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-27 00:56:02.614508 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-27 00:56:02.614512 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-27 00:56:02.614516 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-27 00:56:02.614519 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-27 00:56:02.614523 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-27 00:56:02.614527 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-27 00:56:02.614530 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-27 00:56:02.614534 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-27 00:56:02.614538 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-27 00:56:02.614541 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-27 00:56:02.614545 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-27 00:56:02.614549 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-27 00:56:02.614552 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-27 00:56:02.614556 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-27 00:56:02.614560 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 
2026-03-27 00:56:02.614563 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-27 00:56:02.614567 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-27 00:56:02.614571 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-27 00:56:02.614575 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-27 00:56:02.614578 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-27 00:56:02.614582 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-27 00:56:02.614586 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-27 00:56:02.614589 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-27 00:56:02.614593 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-27 00:56:02.614597 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-27 00:56:02.614600 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-27 00:56:02.614604 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-27 00:56:02.614608 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-27 00:56:02.614612 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-27 00:56:02.614619 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-27 00:56:02.614623 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-27 00:56:02.614627 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-27 00:56:02.614631 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-27 00:56:02.614647 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-27 00:56:02.614651 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-27 00:56:02.614655 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-27 00:56:02.614659 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-27 00:56:02.614663 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-27 00:56:02.614666 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-27 00:56:02.614670 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-27 00:56:02.614674 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-27 00:56:02.614678 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-27 00:56:02.614681 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-27 00:56:02.614685 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-27 00:56:02.614689 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-27 00:56:02.614695 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-27 00:56:02.614699 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-27 00:56:02.614702 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-27 00:56:02.614706 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-27 00:56:02.614710 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-27 00:56:02.614714 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-27 00:56:02.614717 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-27 00:56:02.614721 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-27 00:56:02.614725 | orchestrator |
2026-03-27 00:56:02.614728 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-27 00:56:02.614732 | orchestrator | Friday 27 March 2026 00:48:39 +0000 (0:00:07.228) 0:03:08.366 **********
2026-03-27 00:56:02.614736 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.614740 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.614743 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.614747 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:56:02.614751 | orchestrator |
2026-03-27 00:56:02.614755 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-27 00:56:02.614758 | orchestrator | Friday 27 March 2026 00:48:40 +0000 (0:00:00.920) 0:03:09.286 **********
2026-03-27 00:56:02.614762 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.614766 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.614770 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.614774 | orchestrator |
2026-03-27 00:56:02.614778 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-27 00:56:02.614781 | orchestrator | Friday 27 March 2026 00:48:41 +0000 (0:00:00.809) 0:03:10.096 **********
2026-03-27 00:56:02.614785 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.614791 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.614795 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.614799 | orchestrator |
2026-03-27 00:56:02.614803 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-27 00:56:02.614807 | orchestrator | Friday 27 March 2026 00:48:42 +0000 (0:00:01.741) 0:03:11.838 **********
2026-03-27 00:56:02.614810 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.614814 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.614818 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.614822 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.614825 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.614829 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.614833 | orchestrator |
2026-03-27 00:56:02.614836 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-27 00:56:02.614840 | orchestrator | Friday 27 March 2026 00:48:43 +0000 (0:00:00.540) 0:03:12.378 **********
2026-03-27 00:56:02.614844 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.614848 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.614851 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.614855 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.614859 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.614863 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.614866 | orchestrator |
2026-03-27 00:56:02.614870 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-27 00:56:02.614874 | orchestrator | Friday 27 March 2026 00:48:43 +0000 (0:00:00.522) 0:03:12.901 **********
2026-03-27 00:56:02.614877 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.614902 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.614906 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.614910 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.614914 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.614918 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.614922 | orchestrator |
2026-03-27 00:56:02.614939 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-27 00:56:02.614944 | orchestrator | Friday 27 March 2026 00:48:44 +0000 (0:00:00.726) 0:03:13.628 **********
2026-03-27 00:56:02.614948 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.614951 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.614955 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.614959 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.614963 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.614967 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.614971 | orchestrator |
2026-03-27 00:56:02.614975 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-27 00:56:02.614979 | orchestrator | Friday 27 March 2026 00:48:45 +0000 (0:00:00.487) 0:03:14.116 **********
2026-03-27 00:56:02.614983 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.614987 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.614991 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.614994 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.614998 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615002 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615006 | orchestrator |
2026-03-27 00:56:02.615010 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-27 00:56:02.615016 | orchestrator | Friday 27 March 2026 00:48:45 +0000 (0:00:00.787) 0:03:14.904 **********
2026-03-27 00:56:02.615020 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615024 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615028 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615035 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615039 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615043 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615047 | orchestrator |
2026-03-27 00:56:02.615051 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-27 00:56:02.615055 | orchestrator | Friday 27 March 2026 00:48:46 +0000 (0:00:00.805) 0:03:15.709 **********
2026-03-27 00:56:02.615059 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615063 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615067 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615071 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615074 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615078 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615082 | orchestrator |
2026-03-27 00:56:02.615086 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-27 00:56:02.615090 | orchestrator | Friday 27 March 2026 00:48:47 +0000 (0:00:01.165) 0:03:16.875 **********
2026-03-27 00:56:02.615094 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615098 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615102 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615106 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615110 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615114 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615118 | orchestrator |
2026-03-27 00:56:02.615122 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-27 00:56:02.615126 | orchestrator | Friday 27 March 2026 00:48:48 +0000 (0:00:00.575) 0:03:17.450 **********
2026-03-27 00:56:02.615130 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615134 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615138 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615142 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.615146 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.615150 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.615154 | orchestrator |
2026-03-27 00:56:02.615158 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-27 00:56:02.615162 | orchestrator | Friday 27 March 2026 00:48:50 +0000 (0:00:01.696) 0:03:19.147 **********
2026-03-27 00:56:02.615166 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.615170 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.615174 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.615178 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615182 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615186 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615190 | orchestrator |
2026-03-27 00:56:02.615194 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-27 00:56:02.615197 | orchestrator | Friday 27 March 2026 00:48:50 +0000 (0:00:00.734) 0:03:19.882 **********
2026-03-27 00:56:02.615201 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.615205 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.615209 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615213 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.615217 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615221 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615225 | orchestrator |
2026-03-27 00:56:02.615229 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-27 00:56:02.615233 | orchestrator | Friday 27 March 2026 00:48:51 +0000 (0:00:00.905) 0:03:20.787 **********
2026-03-27 00:56:02.615237 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615241 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615245 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615249 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615253 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615260 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615264 | orchestrator |
2026-03-27 00:56:02.615268 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-27 00:56:02.615272 | orchestrator | Friday 27 March 2026 00:48:52 +0000 (0:00:00.875) 0:03:21.663 **********
2026-03-27 00:56:02.615276 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.615281 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.615285 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.615289 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615306 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615310 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615314 | orchestrator |
2026-03-27 00:56:02.615318 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-27 00:56:02.615322 | orchestrator | Friday 27 March 2026 00:48:53 +0000 (0:00:00.803) 0:03:22.466 **********
2026-03-27 00:56:02.615327 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-27 00:56:02.615332 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-27 00:56:02.615339 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-27 00:56:02.615343 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-27 00:56:02.615347 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615351 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-27 00:56:02.615355 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-27 00:56:02.615359 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615363 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615366 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615370 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615374 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615378 | orchestrator |
2026-03-27 00:56:02.615382 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-27 00:56:02.615386 | orchestrator | Friday 27 March 2026 00:48:54 +0000 (0:00:00.676) 0:03:23.142 **********
2026-03-27 00:56:02.615390 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615393 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615400 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615404 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615408 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615412 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615416 | orchestrator |
2026-03-27 00:56:02.615420 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-27 00:56:02.615424 | orchestrator | Friday 27 March 2026 00:48:54 +0000 (0:00:00.759) 0:03:23.902 **********
2026-03-27 00:56:02.615428 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615432 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615435 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615439 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615443 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615447 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615451 | orchestrator |
2026-03-27 00:56:02.615455 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-27 00:56:02.615459 | orchestrator | Friday 27 March 2026 00:48:55 +0000 (0:00:00.489) 0:03:24.391 **********
2026-03-27 00:56:02.615463 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615466 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615470 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615474 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615478 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615482 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615486 | orchestrator |
2026-03-27 00:56:02.615490 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-27 00:56:02.615493 | orchestrator | Friday 27 March 2026 00:48:56 +0000 (0:00:00.733) 0:03:25.124 **********
2026-03-27 00:56:02.615497 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615501 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615505 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615509 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615513 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615517 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615521 | orchestrator |
2026-03-27 00:56:02.615524 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-27 00:56:02.615540 | orchestrator | Friday 27 March 2026 00:48:56 +0000 (0:00:00.594) 0:03:25.719 **********
2026-03-27 00:56:02.615544 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615548 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615552 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615558 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615564 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615570 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615577 | orchestrator |
2026-03-27 00:56:02.615582 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-27 00:56:02.615587 | orchestrator | Friday 27 March 2026 00:48:57 +0000 (0:00:00.672) 0:03:26.391 **********
2026-03-27 00:56:02.615594 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.615600 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.615606 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.615612 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615618 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615624 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615630 | orchestrator |
2026-03-27 00:56:02.615637 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-27 00:56:02.615642 | orchestrator | Friday 27 March 2026 00:48:58 +0000 (0:00:00.634) 0:03:27.026 **********
2026-03-27 00:56:02.615649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-27 00:56:02.615652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-27 00:56:02.615656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-27 00:56:02.615664 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615667 | orchestrator |
2026-03-27 00:56:02.615671 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-27 00:56:02.615675 | orchestrator | Friday 27 March 2026 00:48:58 +0000 (0:00:00.382) 0:03:27.408 **********
2026-03-27 00:56:02.615679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-27 00:56:02.615682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-27 00:56:02.615686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-27 00:56:02.615690 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615694 | orchestrator |
2026-03-27 00:56:02.615697 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-27 00:56:02.615701 | orchestrator | Friday 27 March 2026 00:48:58 +0000 (0:00:00.485) 0:03:27.894 **********
2026-03-27 00:56:02.615705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-27 00:56:02.615708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-27 00:56:02.615712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-27 00:56:02.615716 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615720 | orchestrator |
2026-03-27 00:56:02.615723 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-27 00:56:02.615727 | orchestrator | Friday 27 March 2026 00:48:59 +0000 (0:00:00.593) 0:03:28.487 **********
2026-03-27 00:56:02.615731 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.615735 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.615738 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.615742 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615746 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615749 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615753 | orchestrator |
2026-03-27 00:56:02.615757 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-27 00:56:02.615761 | orchestrator | Friday 27 March 2026 00:49:00 +0000 (0:00:00.886) 0:03:29.374 **********
2026-03-27 00:56:02.615764 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-27 00:56:02.615768 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-27 00:56:02.615772 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-27 00:56:02.615775 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.615779 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-27 00:56:02.615783 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-27 00:56:02.615786 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.615790 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-27 00:56:02.615804 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.615808 | orchestrator |
2026-03-27 00:56:02.615812 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-27 00:56:02.615822 | orchestrator | Friday 27 March 2026 00:49:02 +0000 (0:00:01.770) 0:03:31.145 **********
2026-03-27 00:56:02.615825 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.615829 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.615833 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.615859 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:02.615868 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:56:02.615872 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:56:02.615875 | orchestrator |
2026-03-27 00:56:02.615890 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-27 00:56:02.615894 | orchestrator | Friday 27 March 2026 00:49:04 +0000 (0:00:02.522) 0:03:33.667 **********
2026-03-27 00:56:02.615898 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.615901 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.615905 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.615909 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:02.615912 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:56:02.615916 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:56:02.615923 | orchestrator |
2026-03-27 00:56:02.615927 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-27 00:56:02.615931 | orchestrator | Friday 27 March 2026 00:49:06 +0000 (0:00:01.416) 0:03:35.083 **********
2026-03-27 00:56:02.615934 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.615938 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.615942 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.615945 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:56:02.615949 | orchestrator |
2026-03-27 00:56:02.615953 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-27 00:56:02.615973 | orchestrator | Friday 27 March 2026 00:49:06 +0000 (0:00:00.885) 0:03:35.968 **********
2026-03-27 00:56:02.615978 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:02.615982 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:02.615986 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:02.615989 | orchestrator |
2026-03-27 00:56:02.615993 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-27 00:56:02.615997 | orchestrator | Friday 27 March 2026 00:49:07 +0000 (0:00:00.284) 0:03:36.252 **********
2026-03-27 00:56:02.616001 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:02.616005 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:56:02.616008 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:56:02.616012 | orchestrator |
2026-03-27 00:56:02.616016 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-27 00:56:02.616020 | orchestrator | Friday 27 March 2026 00:49:08 +0000 (0:00:01.171) 0:03:37.424 **********
2026-03-27 00:56:02.616023 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-27 00:56:02.616027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-27 00:56:02.616031 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-27 00:56:02.616035 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.616038 | orchestrator |
2026-03-27 00:56:02.616042 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-27 00:56:02.616049 | orchestrator | Friday 27 March 2026 00:49:09 +0000 (0:00:00.746) 0:03:38.170 **********
2026-03-27 00:56:02.616052 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:02.616056 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:02.616060 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:02.616064 | orchestrator |
2026-03-27 00:56:02.616067 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-27 00:56:02.616072 | orchestrator | Friday 27 March 2026 00:49:09 +0000 (0:00:00.297) 0:03:38.468 **********
2026-03-27 00:56:02.616075 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:02.616079 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:02.616083 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:02.616087 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:56:02.616090 | orchestrator |
2026-03-27 00:56:02.616094 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-27 00:56:02.616098 | orchestrator | Friday 27 March 2026 00:49:10 +0000 (0:00:00.934) 0:03:39.402 **********
2026-03-27 00:56:02.616101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-27 00:56:02.616105 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-27 00:56:02.616109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-27 00:56:02.616113 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.616116 | orchestrator |
2026-03-27 00:56:02.616120 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-27 00:56:02.616124 | orchestrator | Friday 27 March 2026 00:49:10 +0000 (0:00:00.382) 0:03:39.785 **********
2026-03-27 00:56:02.616127 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.616131 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.616138 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.616141 | orchestrator |
2026-03-27 00:56:02.616145 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-27 00:56:02.616149 | orchestrator | Friday 27 March 2026 00:49:11 +0000 (0:00:00.290) 0:03:40.075 **********
2026-03-27 00:56:02.616152 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.616156 | orchestrator |
2026-03-27 00:56:02.616160 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-27 00:56:02.616164 | orchestrator | Friday 27 March 2026 00:49:11 +0000 (0:00:00.626) 0:03:40.702 **********
2026-03-27 00:56:02.616167 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.616171 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.616175 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.616178 | orchestrator |
2026-03-27 00:56:02.616182 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-27 00:56:02.616186 | orchestrator | Friday 27 March 2026 00:49:12 +0000 (0:00:00.241) 0:03:41.034 **********
2026-03-27 00:56:02.616189 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.616193 | orchestrator |
2026-03-27 00:56:02.616197 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-27 00:56:02.616201 | orchestrator | Friday 27 March 2026 00:49:12 +0000 (0:00:00.204) 0:03:41.275 **********
2026-03-27 00:56:02.616204 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.616208 | orchestrator |
2026-03-27 00:56:02.616212 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-27 00:56:02.616215 | orchestrator | Friday 27 March 2026 00:49:12 +0000 (0:00:00.106) 0:03:41.480 **********
2026-03-27 00:56:02.616219 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.616223 | orchestrator |
2026-03-27 00:56:02.616226 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-27 00:56:02.616230 | orchestrator | Friday 27 March 2026 00:49:12 +0000 (0:00:00.199) 0:03:41.586 **********
2026-03-27 00:56:02.616234 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.616237 | orchestrator |
2026-03-27 00:56:02.616241 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-27 00:56:02.616245 | orchestrator | Friday 27 March 2026 00:49:12 +0000 (0:00:00.189) 0:03:41.786 **********
2026-03-27 00:56:02.616248 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.616252 | orchestrator |
2026-03-27 00:56:02.616256 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-27 00:56:02.616259 | orchestrator | Friday 27 March 2026 00:49:12 +0000 (0:00:00.189) 0:03:41.976 **********
2026-03-27 00:56:02.616263 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-27 00:56:02.616267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-27 00:56:02.616271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-27 00:56:02.616274 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.616278 | orchestrator |
2026-03-27 00:56:02.616282
| orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-27 00:56:02.616297 | orchestrator | Friday 27 March 2026 00:49:13 +0000 (0:00:00.359) 0:03:42.335 ********** 2026-03-27 00:56:02.616301 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.616305 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.616309 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.616313 | orchestrator | 2026-03-27 00:56:02.616317 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-27 00:56:02.616320 | orchestrator | Friday 27 March 2026 00:49:13 +0000 (0:00:00.472) 0:03:42.807 ********** 2026-03-27 00:56:02.616324 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.616328 | orchestrator | 2026-03-27 00:56:02.616331 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-27 00:56:02.616335 | orchestrator | Friday 27 March 2026 00:49:14 +0000 (0:00:00.217) 0:03:43.025 ********** 2026-03-27 00:56:02.616339 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.616345 | orchestrator | 2026-03-27 00:56:02.616349 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-27 00:56:02.616352 | orchestrator | Friday 27 March 2026 00:49:14 +0000 (0:00:00.221) 0:03:43.247 ********** 2026-03-27 00:56:02.616356 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.616360 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.616364 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.616371 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.616375 | orchestrator | 2026-03-27 00:56:02.616379 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-27 00:56:02.616383 | 
orchestrator | Friday 27 March 2026 00:49:15 +0000 (0:00:00.780) 0:03:44.027 ********** 2026-03-27 00:56:02.616386 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.616390 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.616394 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.616398 | orchestrator | 2026-03-27 00:56:02.616401 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-27 00:56:02.616405 | orchestrator | Friday 27 March 2026 00:49:15 +0000 (0:00:00.497) 0:03:44.525 ********** 2026-03-27 00:56:02.616409 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.616413 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.616416 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.616420 | orchestrator | 2026-03-27 00:56:02.616424 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-27 00:56:02.616427 | orchestrator | Friday 27 March 2026 00:49:16 +0000 (0:00:01.044) 0:03:45.569 ********** 2026-03-27 00:56:02.616431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:56:02.616435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-27 00:56:02.616439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:56:02.616442 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.616446 | orchestrator | 2026-03-27 00:56:02.616450 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-27 00:56:02.616453 | orchestrator | Friday 27 March 2026 00:49:17 +0000 (0:00:00.545) 0:03:46.115 ********** 2026-03-27 00:56:02.616457 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.616461 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.616465 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.616468 | orchestrator | 2026-03-27 00:56:02.616472 | orchestrator | 
RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-27 00:56:02.616476 | orchestrator | Friday 27 March 2026 00:49:17 +0000 (0:00:00.281) 0:03:46.397 ********** 2026-03-27 00:56:02.616479 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.616483 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.616487 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.616491 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.616494 | orchestrator | 2026-03-27 00:56:02.616498 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-27 00:56:02.616502 | orchestrator | Friday 27 March 2026 00:49:18 +0000 (0:00:01.047) 0:03:47.445 ********** 2026-03-27 00:56:02.616506 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.616509 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.616513 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.616517 | orchestrator | 2026-03-27 00:56:02.616520 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-27 00:56:02.616524 | orchestrator | Friday 27 March 2026 00:49:18 +0000 (0:00:00.294) 0:03:47.739 ********** 2026-03-27 00:56:02.616528 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.616532 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.616535 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.616539 | orchestrator | 2026-03-27 00:56:02.616543 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-27 00:56:02.616549 | orchestrator | Friday 27 March 2026 00:49:20 +0000 (0:00:01.385) 0:03:49.124 ********** 2026-03-27 00:56:02.616553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:56:02.616556 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-4)  2026-03-27 00:56:02.616560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:56:02.616564 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.616567 | orchestrator | 2026-03-27 00:56:02.616571 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-27 00:56:02.616575 | orchestrator | Friday 27 March 2026 00:49:20 +0000 (0:00:00.580) 0:03:49.705 ********** 2026-03-27 00:56:02.616579 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.616582 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.616586 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.616590 | orchestrator | 2026-03-27 00:56:02.616594 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-27 00:56:02.616597 | orchestrator | Friday 27 March 2026 00:49:21 +0000 (0:00:00.292) 0:03:49.997 ********** 2026-03-27 00:56:02.616601 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.616605 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.616608 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.616612 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.616616 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.616630 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.616635 | orchestrator | 2026-03-27 00:56:02.616639 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-27 00:56:02.616642 | orchestrator | Friday 27 March 2026 00:49:21 +0000 (0:00:00.547) 0:03:50.544 ********** 2026-03-27 00:56:02.616646 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.616650 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.616654 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.616657 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.616661 | orchestrator | 2026-03-27 00:56:02.616665 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-27 00:56:02.616669 | orchestrator | Friday 27 March 2026 00:49:22 +0000 (0:00:01.076) 0:03:51.621 ********** 2026-03-27 00:56:02.616673 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.616676 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.616680 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.616684 | orchestrator | 2026-03-27 00:56:02.616688 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-27 00:56:02.616691 | orchestrator | Friday 27 March 2026 00:49:22 +0000 (0:00:00.299) 0:03:51.921 ********** 2026-03-27 00:56:02.616697 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.616701 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.616705 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.616708 | orchestrator | 2026-03-27 00:56:02.616712 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-27 00:56:02.616716 | orchestrator | Friday 27 March 2026 00:49:24 +0000 (0:00:01.419) 0:03:53.341 ********** 2026-03-27 00:56:02.616720 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-27 00:56:02.616723 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-27 00:56:02.616727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-27 00:56:02.616731 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.616735 | orchestrator | 2026-03-27 00:56:02.616738 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-27 00:56:02.616742 | orchestrator | Friday 27 March 2026 00:49:24 +0000 (0:00:00.595) 0:03:53.937 ********** 2026-03-27 00:56:02.616746 | orchestrator 
| ok: [testbed-node-0] 2026-03-27 00:56:02.616750 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.616753 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.616760 | orchestrator | 2026-03-27 00:56:02.616764 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-27 00:56:02.616767 | orchestrator | 2026-03-27 00:56:02.616771 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-27 00:56:02.616775 | orchestrator | Friday 27 March 2026 00:49:25 +0000 (0:00:00.531) 0:03:54.468 ********** 2026-03-27 00:56:02.616779 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.616783 | orchestrator | 2026-03-27 00:56:02.616786 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-27 00:56:02.616790 | orchestrator | Friday 27 March 2026 00:49:26 +0000 (0:00:00.646) 0:03:55.115 ********** 2026-03-27 00:56:02.616794 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.616798 | orchestrator | 2026-03-27 00:56:02.616801 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-27 00:56:02.616805 | orchestrator | Friday 27 March 2026 00:49:26 +0000 (0:00:00.537) 0:03:55.653 ********** 2026-03-27 00:56:02.616809 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.616813 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.616816 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.616820 | orchestrator | 2026-03-27 00:56:02.616824 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-27 00:56:02.616828 | orchestrator | Friday 27 March 2026 00:49:27 +0000 (0:00:00.857) 0:03:56.510 ********** 
2026-03-27 00:56:02.616831 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.616835 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.616839 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.616843 | orchestrator | 2026-03-27 00:56:02.616846 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-27 00:56:02.616850 | orchestrator | Friday 27 March 2026 00:49:27 +0000 (0:00:00.330) 0:03:56.841 ********** 2026-03-27 00:56:02.616854 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.616858 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.616861 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.616865 | orchestrator | 2026-03-27 00:56:02.616869 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-27 00:56:02.616872 | orchestrator | Friday 27 March 2026 00:49:28 +0000 (0:00:00.569) 0:03:57.410 ********** 2026-03-27 00:56:02.616876 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.616889 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.616893 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.616897 | orchestrator | 2026-03-27 00:56:02.616901 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-27 00:56:02.616905 | orchestrator | Friday 27 March 2026 00:49:28 +0000 (0:00:00.300) 0:03:57.711 ********** 2026-03-27 00:56:02.616908 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.616912 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.616916 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.616919 | orchestrator | 2026-03-27 00:56:02.616923 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-27 00:56:02.616927 | orchestrator | Friday 27 March 2026 00:49:29 +0000 (0:00:00.867) 0:03:58.579 ********** 2026-03-27 
00:56:02.616931 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.616935 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.616938 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.616942 | orchestrator | 2026-03-27 00:56:02.616946 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-27 00:56:02.616950 | orchestrator | Friday 27 March 2026 00:49:30 +0000 (0:00:00.509) 0:03:59.088 ********** 2026-03-27 00:56:02.616966 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.616970 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.616974 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.616980 | orchestrator | 2026-03-27 00:56:02.616984 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-27 00:56:02.616988 | orchestrator | Friday 27 March 2026 00:49:30 +0000 (0:00:00.648) 0:03:59.736 ********** 2026-03-27 00:56:02.616992 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.616995 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.616999 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617003 | orchestrator | 2026-03-27 00:56:02.617007 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-27 00:56:02.617010 | orchestrator | Friday 27 March 2026 00:49:31 +0000 (0:00:00.871) 0:04:00.607 ********** 2026-03-27 00:56:02.617014 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617018 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617022 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617025 | orchestrator | 2026-03-27 00:56:02.617029 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-27 00:56:02.617033 | orchestrator | Friday 27 March 2026 00:49:32 +0000 (0:00:00.767) 0:04:01.375 ********** 2026-03-27 00:56:02.617037 | orchestrator | 
skipping: [testbed-node-0] 2026-03-27 00:56:02.617040 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.617046 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.617050 | orchestrator | 2026-03-27 00:56:02.617054 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-27 00:56:02.617058 | orchestrator | Friday 27 March 2026 00:49:32 +0000 (0:00:00.388) 0:04:01.763 ********** 2026-03-27 00:56:02.617061 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617065 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617069 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617073 | orchestrator | 2026-03-27 00:56:02.617076 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-27 00:56:02.617080 | orchestrator | Friday 27 March 2026 00:49:33 +0000 (0:00:00.887) 0:04:02.651 ********** 2026-03-27 00:56:02.617084 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.617088 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.617092 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.617095 | orchestrator | 2026-03-27 00:56:02.617099 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-27 00:56:02.617103 | orchestrator | Friday 27 March 2026 00:49:34 +0000 (0:00:00.413) 0:04:03.065 ********** 2026-03-27 00:56:02.617107 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.617110 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.617114 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.617118 | orchestrator | 2026-03-27 00:56:02.617122 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-27 00:56:02.617126 | orchestrator | Friday 27 March 2026 00:49:34 +0000 (0:00:00.485) 0:04:03.550 ********** 2026-03-27 00:56:02.617129 | orchestrator | skipping: 
[testbed-node-0] 2026-03-27 00:56:02.617133 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.617137 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.617141 | orchestrator | 2026-03-27 00:56:02.617145 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-27 00:56:02.617149 | orchestrator | Friday 27 March 2026 00:49:35 +0000 (0:00:00.452) 0:04:04.002 ********** 2026-03-27 00:56:02.617152 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.617156 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.617160 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.617164 | orchestrator | 2026-03-27 00:56:02.617167 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-27 00:56:02.617171 | orchestrator | Friday 27 March 2026 00:49:35 +0000 (0:00:00.647) 0:04:04.650 ********** 2026-03-27 00:56:02.617175 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.617179 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.617182 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.617186 | orchestrator | 2026-03-27 00:56:02.617190 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-27 00:56:02.617196 | orchestrator | Friday 27 March 2026 00:49:35 +0000 (0:00:00.284) 0:04:04.935 ********** 2026-03-27 00:56:02.617200 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617203 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617207 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617211 | orchestrator | 2026-03-27 00:56:02.617215 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-27 00:56:02.617218 | orchestrator | Friday 27 March 2026 00:49:36 +0000 (0:00:00.320) 0:04:05.255 ********** 2026-03-27 00:56:02.617222 | orchestrator | ok: [testbed-node-0] 2026-03-27 
00:56:02.617226 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617229 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617233 | orchestrator | 2026-03-27 00:56:02.617237 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-27 00:56:02.617240 | orchestrator | Friday 27 March 2026 00:49:36 +0000 (0:00:00.366) 0:04:05.622 ********** 2026-03-27 00:56:02.617244 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617248 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617251 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617255 | orchestrator | 2026-03-27 00:56:02.617259 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-27 00:56:02.617263 | orchestrator | Friday 27 March 2026 00:49:37 +0000 (0:00:00.751) 0:04:06.373 ********** 2026-03-27 00:56:02.617266 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617270 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617274 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617277 | orchestrator | 2026-03-27 00:56:02.617281 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-27 00:56:02.617285 | orchestrator | Friday 27 March 2026 00:49:37 +0000 (0:00:00.355) 0:04:06.729 ********** 2026-03-27 00:56:02.617289 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.617292 | orchestrator | 2026-03-27 00:56:02.617296 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-27 00:56:02.617300 | orchestrator | Friday 27 March 2026 00:49:38 +0000 (0:00:00.532) 0:04:07.262 ********** 2026-03-27 00:56:02.617304 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.617308 | orchestrator | 2026-03-27 00:56:02.617322 | orchestrator | TASK [ceph-mon : Generate 
monitor initial keyring] ***************************** 2026-03-27 00:56:02.617327 | orchestrator | Friday 27 March 2026 00:49:38 +0000 (0:00:00.330) 0:04:07.592 ********** 2026-03-27 00:56:02.617331 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-27 00:56:02.617335 | orchestrator | 2026-03-27 00:56:02.617338 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-27 00:56:02.617342 | orchestrator | Friday 27 March 2026 00:49:39 +0000 (0:00:01.010) 0:04:08.603 ********** 2026-03-27 00:56:02.617346 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617349 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617353 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617357 | orchestrator | 2026-03-27 00:56:02.617360 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-27 00:56:02.617364 | orchestrator | Friday 27 March 2026 00:49:40 +0000 (0:00:00.388) 0:04:08.991 ********** 2026-03-27 00:56:02.617368 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617372 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617375 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617379 | orchestrator | 2026-03-27 00:56:02.617383 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-27 00:56:02.617386 | orchestrator | Friday 27 March 2026 00:49:40 +0000 (0:00:00.326) 0:04:09.318 ********** 2026-03-27 00:56:02.617392 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617396 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.617399 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.617403 | orchestrator | 2026-03-27 00:56:02.617407 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-27 00:56:02.617413 | orchestrator | Friday 27 March 2026 00:49:41 +0000 (0:00:01.389) 0:04:10.708 ********** 
2026-03-27 00:56:02.617417 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617420 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.617424 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.617428 | orchestrator | 2026-03-27 00:56:02.617432 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-27 00:56:02.617435 | orchestrator | Friday 27 March 2026 00:49:42 +0000 (0:00:01.017) 0:04:11.725 ********** 2026-03-27 00:56:02.617439 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617443 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.617446 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.617450 | orchestrator | 2026-03-27 00:56:02.617454 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-27 00:56:02.617457 | orchestrator | Friday 27 March 2026 00:49:43 +0000 (0:00:00.620) 0:04:12.345 ********** 2026-03-27 00:56:02.617461 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617465 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617469 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617472 | orchestrator | 2026-03-27 00:56:02.617476 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-27 00:56:02.617480 | orchestrator | Friday 27 March 2026 00:49:44 +0000 (0:00:00.661) 0:04:13.007 ********** 2026-03-27 00:56:02.617483 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617487 | orchestrator | 2026-03-27 00:56:02.617491 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-27 00:56:02.617494 | orchestrator | Friday 27 March 2026 00:49:45 +0000 (0:00:01.267) 0:04:14.275 ********** 2026-03-27 00:56:02.617498 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617502 | orchestrator | 2026-03-27 00:56:02.617506 | orchestrator | TASK [ceph-mon : 
Copy admin keyring over to mons] ****************************** 2026-03-27 00:56:02.617509 | orchestrator | Friday 27 March 2026 00:49:45 +0000 (0:00:00.554) 0:04:14.830 ********** 2026-03-27 00:56:02.617513 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-27 00:56:02.617517 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:56:02.617520 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:56:02.617524 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-27 00:56:02.617528 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-27 00:56:02.617532 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-27 00:56:02.617535 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-27 00:56:02.617539 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-27 00:56:02.617543 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-27 00:56:02.617546 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-27 00:56:02.617550 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-27 00:56:02.617554 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-27 00:56:02.617558 | orchestrator | 2026-03-27 00:56:02.617561 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-27 00:56:02.617565 | orchestrator | Friday 27 March 2026 00:49:49 +0000 (0:00:03.901) 0:04:18.732 ********** 2026-03-27 00:56:02.617569 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617572 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.617576 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.617580 | orchestrator | 2026-03-27 00:56:02.617583 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-03-27 00:56:02.617587 | orchestrator | Friday 27 March 2026 00:49:51 +0000 (0:00:01.908) 0:04:20.640 ********** 2026-03-27 00:56:02.617591 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617595 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617598 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617604 | orchestrator | 2026-03-27 00:56:02.617608 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-27 00:56:02.617611 | orchestrator | Friday 27 March 2026 00:49:51 +0000 (0:00:00.320) 0:04:20.961 ********** 2026-03-27 00:56:02.617615 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617619 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617622 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617626 | orchestrator | 2026-03-27 00:56:02.617630 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-27 00:56:02.617634 | orchestrator | Friday 27 March 2026 00:49:52 +0000 (0:00:00.371) 0:04:21.333 ********** 2026-03-27 00:56:02.617638 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617659 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.617664 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.617667 | orchestrator | 2026-03-27 00:56:02.617671 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-27 00:56:02.617675 | orchestrator | Friday 27 March 2026 00:49:54 +0000 (0:00:01.768) 0:04:23.102 ********** 2026-03-27 00:56:02.617678 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617682 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.617686 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.617690 | orchestrator | 2026-03-27 00:56:02.617693 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-27 
00:56:02.617697 | orchestrator | Friday 27 March 2026 00:49:55 +0000 (0:00:01.218) 0:04:24.321 ********** 2026-03-27 00:56:02.617701 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.617705 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.617708 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.617712 | orchestrator | 2026-03-27 00:56:02.617716 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-27 00:56:02.617719 | orchestrator | Friday 27 March 2026 00:49:55 +0000 (0:00:00.286) 0:04:24.607 ********** 2026-03-27 00:56:02.617725 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.617729 | orchestrator | 2026-03-27 00:56:02.617733 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-27 00:56:02.617737 | orchestrator | Friday 27 March 2026 00:49:56 +0000 (0:00:00.504) 0:04:25.111 ********** 2026-03-27 00:56:02.617740 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.617744 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.617748 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.617751 | orchestrator | 2026-03-27 00:56:02.617755 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-27 00:56:02.617759 | orchestrator | Friday 27 March 2026 00:49:56 +0000 (0:00:00.592) 0:04:25.703 ********** 2026-03-27 00:56:02.617763 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.617766 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.617770 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.617774 | orchestrator | 2026-03-27 00:56:02.617777 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-27 00:56:02.617781 | orchestrator | Friday 27 March 2026 00:49:57 
+0000 (0:00:00.325) 0:04:26.029 ********** 2026-03-27 00:56:02.617785 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.617789 | orchestrator | 2026-03-27 00:56:02.617792 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-27 00:56:02.617796 | orchestrator | Friday 27 March 2026 00:49:57 +0000 (0:00:00.647) 0:04:26.677 ********** 2026-03-27 00:56:02.617800 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617804 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.617807 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.617811 | orchestrator | 2026-03-27 00:56:02.617815 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-27 00:56:02.617821 | orchestrator | Friday 27 March 2026 00:49:59 +0000 (0:00:02.258) 0:04:28.935 ********** 2026-03-27 00:56:02.617825 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617828 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.617832 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.617836 | orchestrator | 2026-03-27 00:56:02.617839 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-27 00:56:02.617843 | orchestrator | Friday 27 March 2026 00:50:01 +0000 (0:00:01.420) 0:04:30.356 ********** 2026-03-27 00:56:02.617847 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617856 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.617860 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.617864 | orchestrator | 2026-03-27 00:56:02.617868 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-27 00:56:02.617877 | orchestrator | Friday 27 March 2026 00:50:03 +0000 (0:00:01.868) 0:04:32.225 ********** 2026-03-27 00:56:02.617905 | 
orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.617909 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.617918 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.617922 | orchestrator | 2026-03-27 00:56:02.617926 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-27 00:56:02.617930 | orchestrator | Friday 27 March 2026 00:50:04 +0000 (0:00:01.662) 0:04:33.888 ********** 2026-03-27 00:56:02.617939 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.617943 | orchestrator | 2026-03-27 00:56:02.617946 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-27 00:56:02.617950 | orchestrator | Friday 27 March 2026 00:50:05 +0000 (0:00:00.820) 0:04:34.708 ********** 2026-03-27 00:56:02.617959 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-27 00:56:02.617963 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.617967 | orchestrator | 2026-03-27 00:56:02.617970 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-27 00:56:02.617980 | orchestrator | Friday 27 March 2026 00:50:27 +0000 (0:00:21.572) 0:04:56.280 ********** 2026-03-27 00:56:02.617984 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.617988 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.617991 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618000 | orchestrator | 2026-03-27 00:56:02.618004 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-27 00:56:02.618008 | orchestrator | Friday 27 March 2026 00:50:33 +0000 (0:00:05.910) 0:05:02.191 ********** 2026-03-27 00:56:02.618051 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618064 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618068 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618072 | orchestrator | 2026-03-27 00:56:02.618081 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-27 00:56:02.618122 | orchestrator | Friday 27 March 2026 00:50:33 +0000 (0:00:00.269) 0:05:02.460 ********** 2026-03-27 00:56:02.618128 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__09fad2df142ce62413465efb62f460c1bf24419f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-27 00:56:02.618133 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__09fad2df142ce62413465efb62f460c1bf24419f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-27 00:56:02.618146 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__09fad2df142ce62413465efb62f460c1bf24419f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-27 00:56:02.618156 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__09fad2df142ce62413465efb62f460c1bf24419f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-27 00:56:02.618160 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__09fad2df142ce62413465efb62f460c1bf24419f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-27 00:56:02.618164 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__09fad2df142ce62413465efb62f460c1bf24419f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__09fad2df142ce62413465efb62f460c1bf24419f'}])  2026-03-27 00:56:02.618169 | orchestrator | 2026-03-27 00:56:02.618172 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-27 00:56:02.618176 | orchestrator | Friday 27 March 2026 00:50:43 +0000 (0:00:10.055) 0:05:12.515 ********** 2026-03-27 00:56:02.618180 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618184 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618187 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618191 | orchestrator | 2026-03-27 00:56:02.618195 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-27 00:56:02.618199 | orchestrator | Friday 27 March 2026 00:50:43 +0000 (0:00:00.309) 0:05:12.825 ********** 2026-03-27 00:56:02.618202 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.618206 | orchestrator | 2026-03-27 00:56:02.618210 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-27 00:56:02.618214 | orchestrator | Friday 27 March 2026 00:50:44 +0000 (0:00:00.468) 0:05:13.293 ********** 2026-03-27 00:56:02.618217 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618221 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618225 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.618228 | orchestrator | 2026-03-27 00:56:02.618232 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-27 00:56:02.618236 | orchestrator | Friday 27 March 2026 00:50:44 +0000 (0:00:00.470) 0:05:13.764 ********** 2026-03-27 00:56:02.618240 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618243 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618247 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618251 | orchestrator | 2026-03-27 00:56:02.618254 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-27 
00:56:02.618258 | orchestrator | Friday 27 March 2026 00:50:45 +0000 (0:00:00.286) 0:05:14.050 ********** 2026-03-27 00:56:02.618262 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-27 00:56:02.618266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-27 00:56:02.618269 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-27 00:56:02.618273 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618277 | orchestrator | 2026-03-27 00:56:02.618281 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-27 00:56:02.618284 | orchestrator | Friday 27 March 2026 00:50:45 +0000 (0:00:00.552) 0:05:14.603 ********** 2026-03-27 00:56:02.618290 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618294 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618310 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.618314 | orchestrator | 2026-03-27 00:56:02.618318 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-27 00:56:02.618322 | orchestrator | 2026-03-27 00:56:02.618326 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-27 00:56:02.618329 | orchestrator | Friday 27 March 2026 00:50:46 +0000 (0:00:00.699) 0:05:15.302 ********** 2026-03-27 00:56:02.618333 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.618337 | orchestrator | 2026-03-27 00:56:02.618340 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-27 00:56:02.618344 | orchestrator | Friday 27 March 2026 00:50:46 +0000 (0:00:00.449) 0:05:15.752 ********** 2026-03-27 00:56:02.618350 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-27 00:56:02.618356 | orchestrator | 2026-03-27 00:56:02.618362 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-27 00:56:02.618369 | orchestrator | Friday 27 March 2026 00:50:47 +0000 (0:00:00.451) 0:05:16.203 ********** 2026-03-27 00:56:02.618380 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618386 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618391 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.618398 | orchestrator | 2026-03-27 00:56:02.618404 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-27 00:56:02.618410 | orchestrator | Friday 27 March 2026 00:50:48 +0000 (0:00:00.882) 0:05:17.086 ********** 2026-03-27 00:56:02.618417 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618421 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618425 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618428 | orchestrator | 2026-03-27 00:56:02.618432 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-27 00:56:02.618436 | orchestrator | Friday 27 March 2026 00:50:48 +0000 (0:00:00.236) 0:05:17.322 ********** 2026-03-27 00:56:02.618439 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618443 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618447 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618450 | orchestrator | 2026-03-27 00:56:02.618454 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-27 00:56:02.618458 | orchestrator | Friday 27 March 2026 00:50:48 +0000 (0:00:00.230) 0:05:17.552 ********** 2026-03-27 00:56:02.618461 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618465 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618469 | orchestrator | skipping: 
[testbed-node-2] 2026-03-27 00:56:02.618472 | orchestrator | 2026-03-27 00:56:02.618476 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-27 00:56:02.618480 | orchestrator | Friday 27 March 2026 00:50:48 +0000 (0:00:00.219) 0:05:17.772 ********** 2026-03-27 00:56:02.618483 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618487 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618491 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.618494 | orchestrator | 2026-03-27 00:56:02.618498 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-27 00:56:02.618502 | orchestrator | Friday 27 March 2026 00:50:49 +0000 (0:00:00.865) 0:05:18.637 ********** 2026-03-27 00:56:02.618506 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618509 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618513 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618517 | orchestrator | 2026-03-27 00:56:02.618520 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-27 00:56:02.618524 | orchestrator | Friday 27 March 2026 00:50:49 +0000 (0:00:00.270) 0:05:18.908 ********** 2026-03-27 00:56:02.618531 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618534 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618538 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618542 | orchestrator | 2026-03-27 00:56:02.618545 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-27 00:56:02.618549 | orchestrator | Friday 27 March 2026 00:50:50 +0000 (0:00:00.277) 0:05:19.186 ********** 2026-03-27 00:56:02.618553 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618556 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618560 | orchestrator | ok: [testbed-node-2] 2026-03-27 
00:56:02.618564 | orchestrator | 2026-03-27 00:56:02.618567 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-27 00:56:02.618571 | orchestrator | Friday 27 March 2026 00:50:50 +0000 (0:00:00.710) 0:05:19.896 ********** 2026-03-27 00:56:02.618575 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618579 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618582 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.618586 | orchestrator | 2026-03-27 00:56:02.618589 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-27 00:56:02.618593 | orchestrator | Friday 27 March 2026 00:50:52 +0000 (0:00:01.151) 0:05:21.048 ********** 2026-03-27 00:56:02.618597 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618601 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618604 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618608 | orchestrator | 2026-03-27 00:56:02.618612 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-27 00:56:02.618615 | orchestrator | Friday 27 March 2026 00:50:52 +0000 (0:00:00.306) 0:05:21.354 ********** 2026-03-27 00:56:02.618619 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618623 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618626 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.618630 | orchestrator | 2026-03-27 00:56:02.618634 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-27 00:56:02.618637 | orchestrator | Friday 27 March 2026 00:50:52 +0000 (0:00:00.375) 0:05:21.730 ********** 2026-03-27 00:56:02.618641 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618645 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618648 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618652 | orchestrator | 
2026-03-27 00:56:02.618656 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-27 00:56:02.618671 | orchestrator | Friday 27 March 2026 00:50:53 +0000 (0:00:00.277) 0:05:22.007 ********** 2026-03-27 00:56:02.618676 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618680 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618683 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618687 | orchestrator | 2026-03-27 00:56:02.618691 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-27 00:56:02.618695 | orchestrator | Friday 27 March 2026 00:50:53 +0000 (0:00:00.522) 0:05:22.529 ********** 2026-03-27 00:56:02.618698 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618702 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618706 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618709 | orchestrator | 2026-03-27 00:56:02.618713 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-27 00:56:02.618717 | orchestrator | Friday 27 March 2026 00:50:53 +0000 (0:00:00.274) 0:05:22.804 ********** 2026-03-27 00:56:02.618721 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618724 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618728 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618732 | orchestrator | 2026-03-27 00:56:02.618736 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-27 00:56:02.618739 | orchestrator | Friday 27 March 2026 00:50:54 +0000 (0:00:00.292) 0:05:23.097 ********** 2026-03-27 00:56:02.618743 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618751 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618755 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618759 | orchestrator | 
2026-03-27 00:56:02.618762 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-27 00:56:02.618766 | orchestrator | Friday 27 March 2026 00:50:54 +0000 (0:00:00.260) 0:05:23.358 ********** 2026-03-27 00:56:02.618770 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618774 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618777 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.618781 | orchestrator | 2026-03-27 00:56:02.618785 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-27 00:56:02.618788 | orchestrator | Friday 27 March 2026 00:50:54 +0000 (0:00:00.268) 0:05:23.626 ********** 2026-03-27 00:56:02.618792 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618796 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618800 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.618803 | orchestrator | 2026-03-27 00:56:02.618807 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-27 00:56:02.618811 | orchestrator | Friday 27 March 2026 00:50:55 +0000 (0:00:00.505) 0:05:24.132 ********** 2026-03-27 00:56:02.618814 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618818 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618822 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.618825 | orchestrator | 2026-03-27 00:56:02.618829 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-27 00:56:02.618833 | orchestrator | Friday 27 March 2026 00:50:55 +0000 (0:00:00.476) 0:05:24.609 ********** 2026-03-27 00:56:02.618837 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-27 00:56:02.618841 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-27 00:56:02.618844 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-27 00:56:02.618848 | orchestrator | 2026-03-27 00:56:02.618852 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-27 00:56:02.618856 | orchestrator | Friday 27 March 2026 00:50:56 +0000 (0:00:00.767) 0:05:25.376 ********** 2026-03-27 00:56:02.618859 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-03-27 00:56:02.618863 | orchestrator | 2026-03-27 00:56:02.618867 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-27 00:56:02.618870 | orchestrator | Friday 27 March 2026 00:50:57 +0000 (0:00:00.768) 0:05:26.145 ********** 2026-03-27 00:56:02.618874 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.618878 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.618891 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.618894 | orchestrator | 2026-03-27 00:56:02.618898 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-27 00:56:02.618902 | orchestrator | Friday 27 March 2026 00:50:57 +0000 (0:00:00.722) 0:05:26.867 ********** 2026-03-27 00:56:02.618906 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.618909 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.618913 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.618917 | orchestrator | 2026-03-27 00:56:02.618920 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-27 00:56:02.618924 | orchestrator | Friday 27 March 2026 00:50:58 +0000 (0:00:00.273) 0:05:27.141 ********** 2026-03-27 00:56:02.618928 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-27 00:56:02.618932 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-27 00:56:02.618935 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-27 00:56:02.618939 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-27 00:56:02.618943 | orchestrator | 2026-03-27 00:56:02.618946 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-27 00:56:02.618950 | orchestrator | Friday 27 March 2026 00:51:05 +0000 (0:00:07.561) 0:05:34.702 ********** 2026-03-27 00:56:02.618956 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.618960 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.618964 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.618967 | orchestrator | 2026-03-27 00:56:02.618971 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-27 00:56:02.618975 | orchestrator | Friday 27 March 2026 00:51:06 +0000 (0:00:00.503) 0:05:35.206 ********** 2026-03-27 00:56:02.618978 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-27 00:56:02.618982 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-27 00:56:02.618986 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-27 00:56:02.618989 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-27 00:56:02.618993 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:56:02.619008 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:56:02.619013 | orchestrator | 2026-03-27 00:56:02.619016 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-27 00:56:02.619020 | orchestrator | Friday 27 March 2026 00:51:07 +0000 (0:00:01.702) 0:05:36.909 ********** 2026-03-27 00:56:02.619024 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-27 00:56:02.619027 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-27 00:56:02.619031 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-27 
00:56:02.619035 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-27 00:56:02.619038 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-27 00:56:02.619042 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-27 00:56:02.619046 | orchestrator | 2026-03-27 00:56:02.619050 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-27 00:56:02.619053 | orchestrator | Friday 27 March 2026 00:51:09 +0000 (0:00:01.171) 0:05:38.080 ********** 2026-03-27 00:56:02.619057 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.619061 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.619064 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.619068 | orchestrator | 2026-03-27 00:56:02.619072 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-27 00:56:02.619077 | orchestrator | Friday 27 March 2026 00:51:09 +0000 (0:00:00.756) 0:05:38.837 ********** 2026-03-27 00:56:02.619081 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.619085 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.619088 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.619092 | orchestrator | 2026-03-27 00:56:02.619096 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-27 00:56:02.619100 | orchestrator | Friday 27 March 2026 00:51:10 +0000 (0:00:00.268) 0:05:39.105 ********** 2026-03-27 00:56:02.619103 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.619107 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.619111 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.619114 | orchestrator | 2026-03-27 00:56:02.619118 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-27 00:56:02.619122 | orchestrator | Friday 27 March 2026 00:51:10 +0000 (0:00:00.470) 0:05:39.576 
********** 2026-03-27 00:56:02.619125 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.619129 | orchestrator | 2026-03-27 00:56:02.619133 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-27 00:56:02.619136 | orchestrator | Friday 27 March 2026 00:51:11 +0000 (0:00:00.456) 0:05:40.033 ********** 2026-03-27 00:56:02.619140 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.619144 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.619148 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.619151 | orchestrator | 2026-03-27 00:56:02.619155 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-27 00:56:02.619161 | orchestrator | Friday 27 March 2026 00:51:11 +0000 (0:00:00.264) 0:05:40.298 ********** 2026-03-27 00:56:02.619165 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.619169 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.619172 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.619176 | orchestrator | 2026-03-27 00:56:02.619180 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-27 00:56:02.619184 | orchestrator | Friday 27 March 2026 00:51:11 +0000 (0:00:00.441) 0:05:40.739 ********** 2026-03-27 00:56:02.619187 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.619191 | orchestrator | 2026-03-27 00:56:02.619195 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-27 00:56:02.619198 | orchestrator | Friday 27 March 2026 00:51:12 +0000 (0:00:00.446) 0:05:41.185 ********** 2026-03-27 00:56:02.619202 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.619206 | orchestrator | changed: 
[testbed-node-1] 2026-03-27 00:56:02.619209 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.619213 | orchestrator | 2026-03-27 00:56:02.619217 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-27 00:56:02.619220 | orchestrator | Friday 27 March 2026 00:51:13 +0000 (0:00:01.257) 0:05:42.443 ********** 2026-03-27 00:56:02.619224 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.619228 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.619231 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.619235 | orchestrator | 2026-03-27 00:56:02.619239 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-27 00:56:02.619242 | orchestrator | Friday 27 March 2026 00:51:15 +0000 (0:00:01.659) 0:05:44.103 ********** 2026-03-27 00:56:02.619246 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.619250 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.619253 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.619257 | orchestrator | 2026-03-27 00:56:02.619261 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-27 00:56:02.619264 | orchestrator | Friday 27 March 2026 00:51:17 +0000 (0:00:01.877) 0:05:45.980 ********** 2026-03-27 00:56:02.619268 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.619272 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.619275 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.619279 | orchestrator | 2026-03-27 00:56:02.619283 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-27 00:56:02.619286 | orchestrator | Friday 27 March 2026 00:51:19 +0000 (0:00:02.135) 0:05:48.116 ********** 2026-03-27 00:56:02.619290 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.619294 | orchestrator | skipping: 
[testbed-node-1] 2026-03-27 00:56:02.619297 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-27 00:56:02.619301 | orchestrator | 2026-03-27 00:56:02.619305 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-27 00:56:02.619308 | orchestrator | Friday 27 March 2026 00:51:19 +0000 (0:00:00.408) 0:05:48.525 ********** 2026-03-27 00:56:02.619323 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-27 00:56:02.619327 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-27 00:56:02.619331 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-27 00:56:02.619335 | orchestrator | 2026-03-27 00:56:02.619339 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-27 00:56:02.619342 | orchestrator | Friday 27 March 2026 00:51:33 +0000 (0:00:13.589) 0:06:02.114 ********** 2026-03-27 00:56:02.619346 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-27 00:56:02.619350 | orchestrator | 2026-03-27 00:56:02.619353 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-27 00:56:02.619360 | orchestrator | Friday 27 March 2026 00:51:34 +0000 (0:00:01.406) 0:06:03.520 ********** 2026-03-27 00:56:02.619363 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.619367 | orchestrator | 2026-03-27 00:56:02.619371 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-27 00:56:02.619374 | orchestrator | Friday 27 March 2026 00:51:34 +0000 (0:00:00.353) 0:06:03.874 ********** 2026-03-27 00:56:02.619378 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.619382 | orchestrator | 2026-03-27 00:56:02.619387 | orchestrator | TASK 
[ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-27 00:56:02.619391 | orchestrator | Friday 27 March 2026 00:51:35 +0000 (0:00:00.146) 0:06:04.021 ********** 2026-03-27 00:56:02.619395 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-27 00:56:02.619399 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-27 00:56:02.619402 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-27 00:56:02.619406 | orchestrator | 2026-03-27 00:56:02.619410 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-03-27 00:56:02.619413 | orchestrator | Friday 27 March 2026 00:51:41 +0000 (0:00:06.012) 0:06:10.033 ********** 2026-03-27 00:56:02.619417 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-27 00:56:02.619421 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-27 00:56:02.619424 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-27 00:56:02.619428 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-27 00:56:02.619432 | orchestrator | 2026-03-27 00:56:02.619435 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-27 00:56:02.619439 | orchestrator | Friday 27 March 2026 00:51:45 +0000 (0:00:04.645) 0:06:14.679 ********** 2026-03-27 00:56:02.619443 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.619446 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.619450 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.619454 | orchestrator | 2026-03-27 00:56:02.619457 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-27 00:56:02.619461 | orchestrator | Friday 27 March 2026 
00:51:46 +0000 (0:00:01.114) 0:06:15.793 ********** 2026-03-27 00:56:02.619465 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.619469 | orchestrator | 2026-03-27 00:56:02.619472 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-27 00:56:02.619476 | orchestrator | Friday 27 March 2026 00:51:47 +0000 (0:00:00.453) 0:06:16.247 ********** 2026-03-27 00:56:02.619480 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.619483 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.619487 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.619491 | orchestrator | 2026-03-27 00:56:02.619494 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-27 00:56:02.619498 | orchestrator | Friday 27 March 2026 00:51:47 +0000 (0:00:00.274) 0:06:16.521 ********** 2026-03-27 00:56:02.619502 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.619505 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.619509 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.619513 | orchestrator | 2026-03-27 00:56:02.619516 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-27 00:56:02.619520 | orchestrator | Friday 27 March 2026 00:51:48 +0000 (0:00:01.368) 0:06:17.889 ********** 2026-03-27 00:56:02.619524 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-27 00:56:02.619528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-27 00:56:02.619531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-27 00:56:02.619535 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.619541 | orchestrator | 2026-03-27 00:56:02.619545 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] 
********* 2026-03-27 00:56:02.619548 | orchestrator | Friday 27 March 2026 00:51:49 +0000 (0:00:00.573) 0:06:18.462 ********** 2026-03-27 00:56:02.619552 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.619556 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.619559 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.619563 | orchestrator | 2026-03-27 00:56:02.619567 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-27 00:56:02.619570 | orchestrator | 2026-03-27 00:56:02.619574 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-27 00:56:02.619578 | orchestrator | Friday 27 March 2026 00:51:50 +0000 (0:00:00.542) 0:06:19.005 ********** 2026-03-27 00:56:02.619582 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.619585 | orchestrator | 2026-03-27 00:56:02.619589 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-27 00:56:02.619593 | orchestrator | Friday 27 March 2026 00:51:50 +0000 (0:00:00.623) 0:06:19.629 ********** 2026-03-27 00:56:02.619609 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.619613 | orchestrator | 2026-03-27 00:56:02.619617 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-27 00:56:02.619621 | orchestrator | Friday 27 March 2026 00:51:51 +0000 (0:00:00.453) 0:06:20.082 ********** 2026-03-27 00:56:02.619624 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.619628 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.619632 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.619635 | orchestrator | 2026-03-27 00:56:02.619639 | orchestrator | TASK 
[ceph-handler : Check for an osd container] ******************************* 2026-03-27 00:56:02.619643 | orchestrator | Friday 27 March 2026 00:51:51 +0000 (0:00:00.249) 0:06:20.332 ********** 2026-03-27 00:56:02.619646 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.619650 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.619654 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.619657 | orchestrator | 2026-03-27 00:56:02.619661 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-27 00:56:02.619665 | orchestrator | Friday 27 March 2026 00:51:52 +0000 (0:00:00.969) 0:06:21.301 ********** 2026-03-27 00:56:02.619669 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.619672 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.619676 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.619680 | orchestrator | 2026-03-27 00:56:02.619685 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-27 00:56:02.619689 | orchestrator | Friday 27 March 2026 00:51:53 +0000 (0:00:00.749) 0:06:22.051 ********** 2026-03-27 00:56:02.619693 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.619696 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.619700 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.619704 | orchestrator | 2026-03-27 00:56:02.619707 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-27 00:56:02.619711 | orchestrator | Friday 27 March 2026 00:51:53 +0000 (0:00:00.817) 0:06:22.868 ********** 2026-03-27 00:56:02.619715 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.619719 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.619722 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.619726 | orchestrator | 2026-03-27 00:56:02.619730 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-27 00:56:02.619733 | orchestrator | Friday 27 March 2026 00:51:54 +0000 (0:00:00.323) 0:06:23.192 ********** 2026-03-27 00:56:02.619737 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.619741 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.619744 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.619748 | orchestrator | 2026-03-27 00:56:02.619755 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-27 00:56:02.619759 | orchestrator | Friday 27 March 2026 00:51:54 +0000 (0:00:00.643) 0:06:23.836 ********** 2026-03-27 00:56:02.619762 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.619766 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.619770 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.619773 | orchestrator | 2026-03-27 00:56:02.619777 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-27 00:56:02.619781 | orchestrator | Friday 27 March 2026 00:51:55 +0000 (0:00:00.343) 0:06:24.179 ********** 2026-03-27 00:56:02.619785 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.619788 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.619792 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.619796 | orchestrator | 2026-03-27 00:56:02.619799 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-27 00:56:02.619803 | orchestrator | Friday 27 March 2026 00:51:55 +0000 (0:00:00.731) 0:06:24.910 ********** 2026-03-27 00:56:02.619807 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.619810 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.619814 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.619818 | orchestrator | 2026-03-27 00:56:02.619821 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-27 
00:56:02.619825 | orchestrator | Friday 27 March 2026 00:51:56 +0000 (0:00:00.775) 0:06:25.686 ********** 2026-03-27 00:56:02.619829 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.619832 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.619836 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.619840 | orchestrator | 2026-03-27 00:56:02.619844 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-27 00:56:02.619847 | orchestrator | Friday 27 March 2026 00:51:57 +0000 (0:00:00.594) 0:06:26.281 ********** 2026-03-27 00:56:02.619851 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.619855 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.619858 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.619862 | orchestrator | 2026-03-27 00:56:02.619866 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-27 00:56:02.619869 | orchestrator | Friday 27 March 2026 00:51:57 +0000 (0:00:00.336) 0:06:26.617 ********** 2026-03-27 00:56:02.619873 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.619877 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.619904 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.619908 | orchestrator | 2026-03-27 00:56:02.619912 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-27 00:56:02.619915 | orchestrator | Friday 27 March 2026 00:51:57 +0000 (0:00:00.361) 0:06:26.979 ********** 2026-03-27 00:56:02.619919 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.619923 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.619927 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.619930 | orchestrator | 2026-03-27 00:56:02.619934 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-27 00:56:02.619938 | orchestrator | Friday 
27 March 2026 00:51:58 +0000 (0:00:00.341) 0:06:27.320 ********** 2026-03-27 00:56:02.619942 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.619945 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.619949 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.619953 | orchestrator | 2026-03-27 00:56:02.619957 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-27 00:56:02.619960 | orchestrator | Friday 27 March 2026 00:51:59 +0000 (0:00:00.693) 0:06:28.013 ********** 2026-03-27 00:56:02.619964 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.619968 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.619972 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.619975 | orchestrator | 2026-03-27 00:56:02.619981 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-27 00:56:02.619988 | orchestrator | Friday 27 March 2026 00:51:59 +0000 (0:00:00.318) 0:06:28.332 ********** 2026-03-27 00:56:02.619992 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.619996 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.619999 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.620003 | orchestrator | 2026-03-27 00:56:02.620007 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-27 00:56:02.620011 | orchestrator | Friday 27 March 2026 00:51:59 +0000 (0:00:00.316) 0:06:28.648 ********** 2026-03-27 00:56:02.620014 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620018 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620022 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.620026 | orchestrator | 2026-03-27 00:56:02.620029 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-27 00:56:02.620033 | orchestrator | Friday 27 March 2026 
00:51:59 +0000 (0:00:00.322) 0:06:28.970 ********** 2026-03-27 00:56:02.620037 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.620040 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.620044 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.620048 | orchestrator | 2026-03-27 00:56:02.620052 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-27 00:56:02.620057 | orchestrator | Friday 27 March 2026 00:52:00 +0000 (0:00:00.648) 0:06:29.619 ********** 2026-03-27 00:56:02.620061 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.620065 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.620068 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.620072 | orchestrator | 2026-03-27 00:56:02.620076 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-27 00:56:02.620080 | orchestrator | Friday 27 March 2026 00:52:01 +0000 (0:00:00.561) 0:06:30.181 ********** 2026-03-27 00:56:02.620083 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.620087 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.620091 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.620094 | orchestrator | 2026-03-27 00:56:02.620098 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-27 00:56:02.620102 | orchestrator | Friday 27 March 2026 00:52:01 +0000 (0:00:00.317) 0:06:30.499 ********** 2026-03-27 00:56:02.620106 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-27 00:56:02.620110 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-27 00:56:02.620113 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-27 00:56:02.620117 | orchestrator | 2026-03-27 00:56:02.620121 | orchestrator | TASK [ceph-osd : Include_tasks 
system_tuning.yml] ****************************** 2026-03-27 00:56:02.620124 | orchestrator | Friday 27 March 2026 00:52:02 +0000 (0:00:01.064) 0:06:31.563 ********** 2026-03-27 00:56:02.620128 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.620132 | orchestrator | 2026-03-27 00:56:02.620136 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-27 00:56:02.620139 | orchestrator | Friday 27 March 2026 00:52:03 +0000 (0:00:00.859) 0:06:32.423 ********** 2026-03-27 00:56:02.620143 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620147 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620150 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.620154 | orchestrator | 2026-03-27 00:56:02.620158 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-27 00:56:02.620162 | orchestrator | Friday 27 March 2026 00:52:03 +0000 (0:00:00.313) 0:06:32.737 ********** 2026-03-27 00:56:02.620165 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620169 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620173 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.620177 | orchestrator | 2026-03-27 00:56:02.620180 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-27 00:56:02.620186 | orchestrator | Friday 27 March 2026 00:52:04 +0000 (0:00:00.319) 0:06:33.056 ********** 2026-03-27 00:56:02.620190 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.620194 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.620197 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.620201 | orchestrator | 2026-03-27 00:56:02.620205 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-27 00:56:02.620208 | 
orchestrator | Friday 27 March 2026 00:52:05 +0000 (0:00:01.118) 0:06:34.175 ********** 2026-03-27 00:56:02.620212 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.620216 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.620220 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.620223 | orchestrator | 2026-03-27 00:56:02.620227 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-27 00:56:02.620231 | orchestrator | Friday 27 March 2026 00:52:05 +0000 (0:00:00.380) 0:06:34.555 ********** 2026-03-27 00:56:02.620234 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-27 00:56:02.620238 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-27 00:56:02.620242 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-27 00:56:02.620246 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-27 00:56:02.620249 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-27 00:56:02.620253 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-27 00:56:02.620257 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-27 00:56:02.620261 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-27 00:56:02.620268 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-27 00:56:02.620272 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-27 00:56:02.620276 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-27 
00:56:02.620279 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-27 00:56:02.620283 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-27 00:56:02.620287 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-27 00:56:02.620291 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-27 00:56:02.620294 | orchestrator | 2026-03-27 00:56:02.620298 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-27 00:56:02.620302 | orchestrator | Friday 27 March 2026 00:52:10 +0000 (0:00:05.384) 0:06:39.940 ********** 2026-03-27 00:56:02.620305 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620309 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620313 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.620317 | orchestrator | 2026-03-27 00:56:02.620323 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-27 00:56:02.620327 | orchestrator | Friday 27 March 2026 00:52:11 +0000 (0:00:00.322) 0:06:40.263 ********** 2026-03-27 00:56:02.620331 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.620335 | orchestrator | 2026-03-27 00:56:02.620338 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-27 00:56:02.620342 | orchestrator | Friday 27 March 2026 00:52:12 +0000 (0:00:00.862) 0:06:41.125 ********** 2026-03-27 00:56:02.620346 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-27 00:56:02.620350 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-27 00:56:02.620356 | orchestrator | ok: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-osd/) 2026-03-27 00:56:02.620359 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-27 00:56:02.620363 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-27 00:56:02.620367 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-27 00:56:02.620371 | orchestrator | 2026-03-27 00:56:02.620374 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-27 00:56:02.620378 | orchestrator | Friday 27 March 2026 00:52:13 +0000 (0:00:01.136) 0:06:42.262 ********** 2026-03-27 00:56:02.620382 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:56:02.620385 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-27 00:56:02.620389 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-27 00:56:02.620393 | orchestrator | 2026-03-27 00:56:02.620397 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-27 00:56:02.620400 | orchestrator | Friday 27 March 2026 00:52:15 +0000 (0:00:01.778) 0:06:44.040 ********** 2026-03-27 00:56:02.620404 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-27 00:56:02.620408 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-27 00:56:02.620412 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.620415 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-27 00:56:02.620419 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-27 00:56:02.620423 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.620426 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-27 00:56:02.620430 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-27 00:56:02.620434 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.620437 | orchestrator | 2026-03-27 00:56:02.620441 | 
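The "Apply operating system tuning" task above loops over sysctl items shaped like `{'name': ..., 'value': ..., 'enable': True}`. The effect of that loop can be sketched as rendering those items into `sysctl.d`-style content; this is an illustrative sketch using the item dicts visible in the log, not ceph-ansible's actual template:

```python
# Sysctl items as they appear in the "Apply operating system tuning"
# loop output above. An item without an explicit "enable" key is
# treated as enabled (an assumption for this sketch).
os_tuning_params = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

def render_sysctl(params):
    """Render items into 'key = value' lines, skipping disabled ones."""
    lines = []
    for item in params:
        if not item.get("enable", True):
            continue
        lines.append(f"{item['name']} = {item['value']}")
    return "\n".join(lines) + "\n"

print(render_sysctl(os_tuning_params))
```

The per-node ordering differences in the log (node-4 finishing `fs.aio-max-nr` before node-3, etc.) are just the free strategy interleaving output; each node applies the same item list.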
orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-27 00:56:02.620445 | orchestrator | Friday 27 March 2026 00:52:16 +0000 (0:00:01.231) 0:06:45.272 ********** 2026-03-27 00:56:02.620449 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-27 00:56:02.620452 | orchestrator | 2026-03-27 00:56:02.620456 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-27 00:56:02.620460 | orchestrator | Friday 27 March 2026 00:52:19 +0000 (0:00:03.457) 0:06:48.729 ********** 2026-03-27 00:56:02.620463 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.620467 | orchestrator | 2026-03-27 00:56:02.620471 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-27 00:56:02.620475 | orchestrator | Friday 27 March 2026 00:52:20 +0000 (0:00:00.541) 0:06:49.271 ********** 2026-03-27 00:56:02.620478 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f', 'data_vg': 'ceph-b8da8e02-1f61-55dd-bf76-a4ff2d17c49f'}) 2026-03-27 00:56:02.620483 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-49c52ee7-6668-5cd2-bd86-f7267953750e', 'data_vg': 'ceph-49c52ee7-6668-5cd2-bd86-f7267953750e'}) 2026-03-27 00:56:02.620486 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bb6fbf97-7198-5485-83ee-7be3b389ad62', 'data_vg': 'ceph-bb6fbf97-7198-5485-83ee-7be3b389ad62'}) 2026-03-27 00:56:02.620490 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2cf1a901-b2f7-5490-8423-90f944953f5f', 'data_vg': 'ceph-2cf1a901-b2f7-5490-8423-90f944953f5f'}) 2026-03-27 00:56:02.620496 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-627e7bc4-4e7d-5af1-903b-8d115676372d', 'data_vg': 
'ceph-627e7bc4-4e7d-5af1-903b-8d115676372d'}) 2026-03-27 00:56:02.620500 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331', 'data_vg': 'ceph-f9aa8e5e-9a1f-5185-aaa5-5b53eb599331'}) 2026-03-27 00:56:02.620504 | orchestrator | 2026-03-27 00:56:02.620508 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-27 00:56:02.620515 | orchestrator | Friday 27 March 2026 00:53:01 +0000 (0:00:40.917) 0:07:30.188 ********** 2026-03-27 00:56:02.620518 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620522 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620526 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.620530 | orchestrator | 2026-03-27 00:56:02.620533 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-27 00:56:02.620537 | orchestrator | Friday 27 March 2026 00:53:01 +0000 (0:00:00.692) 0:07:30.881 ********** 2026-03-27 00:56:02.620541 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.620545 | orchestrator | 2026-03-27 00:56:02.620548 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-27 00:56:02.620554 | orchestrator | Friday 27 March 2026 00:53:02 +0000 (0:00:00.556) 0:07:31.438 ********** 2026-03-27 00:56:02.620558 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.620562 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.620565 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.620569 | orchestrator | 2026-03-27 00:56:02.620573 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-27 00:56:02.620577 | orchestrator | Friday 27 March 2026 00:53:03 +0000 (0:00:00.647) 0:07:32.085 ********** 2026-03-27 00:56:02.620580 | orchestrator | ok: 
[testbed-node-3] 2026-03-27 00:56:02.620584 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.620588 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.620591 | orchestrator | 2026-03-27 00:56:02.620595 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-27 00:56:02.620599 | orchestrator | Friday 27 March 2026 00:53:04 +0000 (0:00:01.893) 0:07:33.978 ********** 2026-03-27 00:56:02.620603 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.620606 | orchestrator | 2026-03-27 00:56:02.620610 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-27 00:56:02.620614 | orchestrator | Friday 27 March 2026 00:53:05 +0000 (0:00:00.525) 0:07:34.503 ********** 2026-03-27 00:56:02.620618 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.620621 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.620625 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.620629 | orchestrator | 2026-03-27 00:56:02.620632 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-27 00:56:02.620636 | orchestrator | Friday 27 March 2026 00:53:06 +0000 (0:00:01.329) 0:07:35.833 ********** 2026-03-27 00:56:02.620640 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.620644 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.620647 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.620651 | orchestrator | 2026-03-27 00:56:02.620655 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-27 00:56:02.620659 | orchestrator | Friday 27 March 2026 00:53:08 +0000 (0:00:01.579) 0:07:37.412 ********** 2026-03-27 00:56:02.620662 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.620666 | orchestrator | changed: [testbed-node-4] 
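The "Get osd ids" / "Collect osd ids" tasks above gather the OSD ids (0-5, spread two per node) that the later systemd tasks loop over. A minimal sketch of that collection step, assuming `ceph-volume lvm list --format json` output keyed by OSD id (the JSON below is a hypothetical trimmed example, not from this run):

```python
import json

# Hypothetical, trimmed ceph-volume output for one node: the top-level
# keys are the OSD ids hosted on that node.
ceph_volume_json = """
{
  "0": [{"type": "block", "path": "/dev/ceph-vg0/osd-block-0"}],
  "3": [{"type": "block", "path": "/dev/ceph-vg1/osd-block-3"}]
}
"""

def collect_osd_ids(raw):
    """Extract the OSD ids from ceph-volume JSON, sorted numerically."""
    return sorted(json.loads(raw).keys(), key=int)

print(collect_osd_ids(ceph_volume_json))  # ['0', '3']
```

Each id then drives one loop item in the subsequent "Systemd start osd" task (matching the `(item=0)` ... `(item=5)` entries in the log).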
2026-03-27 00:56:02.620670 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.620674 | orchestrator | 2026-03-27 00:56:02.620677 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-27 00:56:02.620681 | orchestrator | Friday 27 March 2026 00:53:10 +0000 (0:00:02.108) 0:07:39.521 ********** 2026-03-27 00:56:02.620685 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620688 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620692 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.620696 | orchestrator | 2026-03-27 00:56:02.620700 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-27 00:56:02.620703 | orchestrator | Friday 27 March 2026 00:53:10 +0000 (0:00:00.284) 0:07:39.806 ********** 2026-03-27 00:56:02.620707 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620713 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620717 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.620721 | orchestrator | 2026-03-27 00:56:02.620725 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-27 00:56:02.620728 | orchestrator | Friday 27 March 2026 00:53:11 +0000 (0:00:00.345) 0:07:40.152 ********** 2026-03-27 00:56:02.620732 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-03-27 00:56:02.620736 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-27 00:56:02.620740 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-27 00:56:02.620743 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-27 00:56:02.620747 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-03-27 00:56:02.620751 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-27 00:56:02.620754 | orchestrator | 2026-03-27 00:56:02.620758 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-27 00:56:02.620762 | 
orchestrator | Friday 27 March 2026 00:53:12 +0000 (0:00:01.376) 0:07:41.528 ********** 2026-03-27 00:56:02.620766 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-03-27 00:56:02.620769 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-03-27 00:56:02.620773 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-27 00:56:02.620777 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-27 00:56:02.620781 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-27 00:56:02.620784 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-27 00:56:02.620788 | orchestrator | 2026-03-27 00:56:02.620792 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-27 00:56:02.620795 | orchestrator | Friday 27 March 2026 00:53:14 +0000 (0:00:02.345) 0:07:43.874 ********** 2026-03-27 00:56:02.620799 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-03-27 00:56:02.620803 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-03-27 00:56:02.620809 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-27 00:56:02.620812 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-27 00:56:02.620816 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-27 00:56:02.620820 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-27 00:56:02.620824 | orchestrator | 2026-03-27 00:56:02.620827 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-27 00:56:02.620831 | orchestrator | Friday 27 March 2026 00:53:18 +0000 (0:00:03.839) 0:07:47.714 ********** 2026-03-27 00:56:02.620835 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620839 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620842 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-27 00:56:02.620846 | orchestrator | 2026-03-27 00:56:02.620850 | orchestrator | TASK [ceph-osd : Wait 
for all osd to be up] ************************************ 2026-03-27 00:56:02.620854 | orchestrator | Friday 27 March 2026 00:53:20 +0000 (0:00:02.190) 0:07:49.904 ********** 2026-03-27 00:56:02.620857 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620861 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620865 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-27 00:56:02.620870 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-27 00:56:02.620874 | orchestrator | 2026-03-27 00:56:02.620878 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-27 00:56:02.620890 | orchestrator | Friday 27 March 2026 00:53:33 +0000 (0:00:12.934) 0:08:02.839 ********** 2026-03-27 00:56:02.620894 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620898 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620901 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.620905 | orchestrator | 2026-03-27 00:56:02.620909 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-27 00:56:02.620913 | orchestrator | Friday 27 March 2026 00:53:34 +0000 (0:00:00.825) 0:08:03.665 ********** 2026-03-27 00:56:02.620954 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.620958 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.620961 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.620965 | orchestrator | 2026-03-27 00:56:02.620969 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-27 00:56:02.620973 | orchestrator | Friday 27 March 2026 00:53:35 +0000 (0:00:00.647) 0:08:04.313 ********** 2026-03-27 00:56:02.620976 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 
2026-03-27 00:56:02.620980 | orchestrator | 2026-03-27 00:56:02.620984 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-27 00:56:02.620988 | orchestrator | Friday 27 March 2026 00:53:35 +0000 (0:00:00.552) 0:08:04.865 ********** 2026-03-27 00:56:02.620991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:56:02.620995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-27 00:56:02.620999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:56:02.621003 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621006 | orchestrator | 2026-03-27 00:56:02.621010 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-27 00:56:02.621014 | orchestrator | Friday 27 March 2026 00:53:36 +0000 (0:00:00.381) 0:08:05.247 ********** 2026-03-27 00:56:02.621018 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621021 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621025 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621029 | orchestrator | 2026-03-27 00:56:02.621033 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-27 00:56:02.621037 | orchestrator | Friday 27 March 2026 00:53:36 +0000 (0:00:00.319) 0:08:05.566 ********** 2026-03-27 00:56:02.621040 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621044 | orchestrator | 2026-03-27 00:56:02.621048 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-27 00:56:02.621052 | orchestrator | Friday 27 March 2026 00:53:36 +0000 (0:00:00.225) 0:08:05.792 ********** 2026-03-27 00:56:02.621055 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621059 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621063 | orchestrator | skipping: 
[testbed-node-5] 2026-03-27 00:56:02.621067 | orchestrator | 2026-03-27 00:56:02.621070 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-27 00:56:02.621074 | orchestrator | Friday 27 March 2026 00:53:37 +0000 (0:00:00.572) 0:08:06.365 ********** 2026-03-27 00:56:02.621078 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621082 | orchestrator | 2026-03-27 00:56:02.621085 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-27 00:56:02.621089 | orchestrator | Friday 27 March 2026 00:53:37 +0000 (0:00:00.228) 0:08:06.593 ********** 2026-03-27 00:56:02.621093 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621097 | orchestrator | 2026-03-27 00:56:02.621100 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-27 00:56:02.621104 | orchestrator | Friday 27 March 2026 00:53:37 +0000 (0:00:00.227) 0:08:06.821 ********** 2026-03-27 00:56:02.621108 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621112 | orchestrator | 2026-03-27 00:56:02.621115 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-27 00:56:02.621119 | orchestrator | Friday 27 March 2026 00:53:37 +0000 (0:00:00.144) 0:08:06.965 ********** 2026-03-27 00:56:02.621123 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621127 | orchestrator | 2026-03-27 00:56:02.621130 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-27 00:56:02.621134 | orchestrator | Friday 27 March 2026 00:53:38 +0000 (0:00:00.239) 0:08:07.204 ********** 2026-03-27 00:56:02.621138 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621142 | orchestrator | 2026-03-27 00:56:02.621145 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-27 00:56:02.621152 | 
orchestrator | Friday 27 March 2026 00:53:38 +0000 (0:00:00.217) 0:08:07.422 ********** 2026-03-27 00:56:02.621158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:56:02.621162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:56:02.621165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-27 00:56:02.621169 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621173 | orchestrator | 2026-03-27 00:56:02.621177 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-27 00:56:02.621181 | orchestrator | Friday 27 March 2026 00:53:38 +0000 (0:00:00.390) 0:08:07.812 ********** 2026-03-27 00:56:02.621184 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621188 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621192 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621196 | orchestrator | 2026-03-27 00:56:02.621199 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-27 00:56:02.621203 | orchestrator | Friday 27 March 2026 00:53:39 +0000 (0:00:00.330) 0:08:08.143 ********** 2026-03-27 00:56:02.621207 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621211 | orchestrator | 2026-03-27 00:56:02.621214 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-27 00:56:02.621218 | orchestrator | Friday 27 March 2026 00:53:40 +0000 (0:00:00.974) 0:08:09.117 ********** 2026-03-27 00:56:02.621222 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621225 | orchestrator | 2026-03-27 00:56:02.621231 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-27 00:56:02.621235 | orchestrator | 2026-03-27 00:56:02.621239 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-27 00:56:02.621243 | orchestrator | Friday 27 March 2026 00:53:40 +0000 (0:00:00.665) 0:08:09.782 ********** 2026-03-27 00:56:02.621247 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.621251 | orchestrator | 2026-03-27 00:56:02.621255 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-27 00:56:02.621258 | orchestrator | Friday 27 March 2026 00:53:42 +0000 (0:00:01.241) 0:08:11.023 ********** 2026-03-27 00:56:02.621262 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.621266 | orchestrator | 2026-03-27 00:56:02.621270 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-27 00:56:02.621274 | orchestrator | Friday 27 March 2026 00:53:43 +0000 (0:00:01.248) 0:08:12.272 ********** 2026-03-27 00:56:02.621277 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621281 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621285 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621289 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.621292 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.621296 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.621300 | orchestrator | 2026-03-27 00:56:02.621304 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-27 00:56:02.621307 | orchestrator | Friday 27 March 2026 00:53:44 +0000 (0:00:01.354) 0:08:13.626 ********** 2026-03-27 00:56:02.621311 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621315 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.621319 
| orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621322 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.621326 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621330 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.621334 | orchestrator | 2026-03-27 00:56:02.621337 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-27 00:56:02.621341 | orchestrator | Friday 27 March 2026 00:53:45 +0000 (0:00:00.837) 0:08:14.464 ********** 2026-03-27 00:56:02.621347 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621351 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621355 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.621358 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.621362 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621366 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.621369 | orchestrator | 2026-03-27 00:56:02.621373 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-27 00:56:02.621377 | orchestrator | Friday 27 March 2026 00:53:46 +0000 (0:00:01.099) 0:08:15.563 ********** 2026-03-27 00:56:02.621381 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621384 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621388 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.621392 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621395 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.621399 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.621403 | orchestrator | 2026-03-27 00:56:02.621407 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-27 00:56:02.621410 | orchestrator | Friday 27 March 2026 00:53:47 +0000 (0:00:00.806) 0:08:16.370 ********** 2026-03-27 00:56:02.621414 | orchestrator | skipping: [testbed-node-3] 
2026-03-27 00:56:02.621418 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621422 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621425 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.621429 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.621433 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.621436 | orchestrator | 2026-03-27 00:56:02.621440 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-27 00:56:02.621444 | orchestrator | Friday 27 March 2026 00:53:48 +0000 (0:00:01.044) 0:08:17.415 ********** 2026-03-27 00:56:02.621448 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621451 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621455 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621459 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621462 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621466 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621470 | orchestrator | 2026-03-27 00:56:02.621474 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-27 00:56:02.621477 | orchestrator | Friday 27 March 2026 00:53:49 +0000 (0:00:00.949) 0:08:18.364 ********** 2026-03-27 00:56:02.621481 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621487 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621491 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621494 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621498 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621502 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621506 | orchestrator | 2026-03-27 00:56:02.621509 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-27 00:56:02.621513 | orchestrator | Friday 27 March 2026 
00:53:50 +0000 (0:00:00.635) 0:08:18.999 ********** 2026-03-27 00:56:02.621517 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.621521 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.621524 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.621528 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.621532 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.621535 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.621539 | orchestrator | 2026-03-27 00:56:02.621543 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-27 00:56:02.621547 | orchestrator | Friday 27 March 2026 00:53:51 +0000 (0:00:01.427) 0:08:20.427 ********** 2026-03-27 00:56:02.621550 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.621554 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.621558 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.621564 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.621568 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.621573 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.621577 | orchestrator | 2026-03-27 00:56:02.621581 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-27 00:56:02.621585 | orchestrator | Friday 27 March 2026 00:53:52 +0000 (0:00:01.010) 0:08:21.437 ********** 2026-03-27 00:56:02.621589 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621592 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621596 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621600 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621604 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621607 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621611 | orchestrator | 2026-03-27 00:56:02.621615 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-03-27 00:56:02.621618 | orchestrator | Friday 27 March 2026 00:53:53 +0000 (0:00:00.933) 0:08:22.371 ********** 2026-03-27 00:56:02.621622 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621626 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621630 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621633 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.621637 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.621641 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.621645 | orchestrator | 2026-03-27 00:56:02.621648 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-27 00:56:02.621652 | orchestrator | Friday 27 March 2026 00:53:54 +0000 (0:00:00.666) 0:08:23.038 ********** 2026-03-27 00:56:02.621656 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.621659 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.621663 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.621667 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621670 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621674 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621678 | orchestrator | 2026-03-27 00:56:02.621682 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-27 00:56:02.621685 | orchestrator | Friday 27 March 2026 00:53:55 +0000 (0:00:00.957) 0:08:23.995 ********** 2026-03-27 00:56:02.621689 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.621693 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.621696 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.621700 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621704 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621708 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621711 | orchestrator | 2026-03-27 00:56:02.621715 | orchestrator 
| TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-27 00:56:02.621719 | orchestrator | Friday 27 March 2026 00:53:55 +0000 (0:00:00.624) 0:08:24.620 ********** 2026-03-27 00:56:02.621723 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.621726 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.621730 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.621734 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621737 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621741 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621745 | orchestrator | 2026-03-27 00:56:02.621749 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-27 00:56:02.621752 | orchestrator | Friday 27 March 2026 00:53:56 +0000 (0:00:01.044) 0:08:25.664 ********** 2026-03-27 00:56:02.621756 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621760 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621764 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621767 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621771 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621775 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621778 | orchestrator | 2026-03-27 00:56:02.621782 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-27 00:56:02.621789 | orchestrator | Friday 27 March 2026 00:53:57 +0000 (0:00:00.620) 0:08:26.285 ********** 2026-03-27 00:56:02.621793 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621796 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621800 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621804 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:02.621807 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:02.621811 | 
orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:02.621815 | orchestrator | 2026-03-27 00:56:02.621819 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-27 00:56:02.621822 | orchestrator | Friday 27 March 2026 00:53:58 +0000 (0:00:00.906) 0:08:27.192 ********** 2026-03-27 00:56:02.621826 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.621830 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.621833 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.621837 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.621841 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.621845 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.621848 | orchestrator | 2026-03-27 00:56:02.621852 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-27 00:56:02.621858 | orchestrator | Friday 27 March 2026 00:53:58 +0000 (0:00:00.663) 0:08:27.856 ********** 2026-03-27 00:56:02.621862 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.621865 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.621869 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.621873 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.621876 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.621887 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.621891 | orchestrator | 2026-03-27 00:56:02.621895 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-27 00:56:02.621898 | orchestrator | Friday 27 March 2026 00:53:59 +0000 (0:00:01.078) 0:08:28.934 ********** 2026-03-27 00:56:02.621902 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.621906 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.621909 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.621913 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.621917 | orchestrator 
| ok: [testbed-node-1] 2026-03-27 00:56:02.621921 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.621924 | orchestrator | 2026-03-27 00:56:02.621928 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-27 00:56:02.621932 | orchestrator | Friday 27 March 2026 00:54:01 +0000 (0:00:01.457) 0:08:30.392 ********** 2026-03-27 00:56:02.621936 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-27 00:56:02.621940 | orchestrator | 2026-03-27 00:56:02.621943 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-27 00:56:02.621949 | orchestrator | Friday 27 March 2026 00:54:04 +0000 (0:00:03.266) 0:08:33.658 ********** 2026-03-27 00:56:02.621953 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-27 00:56:02.621956 | orchestrator | 2026-03-27 00:56:02.621960 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-27 00:56:02.621964 | orchestrator | Friday 27 March 2026 00:54:06 +0000 (0:00:01.453) 0:08:35.111 ********** 2026-03-27 00:56:02.621968 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.621972 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.621975 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.621979 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.621983 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.621987 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.621990 | orchestrator | 2026-03-27 00:56:02.622001 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-27 00:56:02.622005 | orchestrator | Friday 27 March 2026 00:54:07 +0000 (0:00:01.369) 0:08:36.481 ********** 2026-03-27 00:56:02.622009 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.622035 | orchestrator | changed: [testbed-node-4] 2026-03-27 
00:56:02.622039 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.622043 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.622047 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.622050 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.622054 | orchestrator | 2026-03-27 00:56:02.622058 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-03-27 00:56:02.622062 | orchestrator | Friday 27 March 2026 00:54:08 +0000 (0:00:01.175) 0:08:37.657 ********** 2026-03-27 00:56:02.622065 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.622070 | orchestrator | 2026-03-27 00:56:02.622073 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-27 00:56:02.622077 | orchestrator | Friday 27 March 2026 00:54:09 +0000 (0:00:01.109) 0:08:38.766 ********** 2026-03-27 00:56:02.622081 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.622085 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.622088 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.622092 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.622096 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.622099 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.622103 | orchestrator | 2026-03-27 00:56:02.622107 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-27 00:56:02.622110 | orchestrator | Friday 27 March 2026 00:54:11 +0000 (0:00:01.829) 0:08:40.596 ********** 2026-03-27 00:56:02.622114 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.622118 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.622121 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.622125 | orchestrator | 
changed: [testbed-node-1] 2026-03-27 00:56:02.622129 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.622132 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.622136 | orchestrator | 2026-03-27 00:56:02.622140 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-27 00:56:02.622143 | orchestrator | Friday 27 March 2026 00:54:15 +0000 (0:00:04.169) 0:08:44.765 ********** 2026-03-27 00:56:02.622147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:02.622151 | orchestrator | 2026-03-27 00:56:02.622155 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-27 00:56:02.622158 | orchestrator | Friday 27 March 2026 00:54:17 +0000 (0:00:01.394) 0:08:46.160 ********** 2026-03-27 00:56:02.622162 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.622166 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.622170 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.622173 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.622177 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.622181 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.622184 | orchestrator | 2026-03-27 00:56:02.622188 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-27 00:56:02.622192 | orchestrator | Friday 27 March 2026 00:54:17 +0000 (0:00:00.691) 0:08:46.851 ********** 2026-03-27 00:56:02.622196 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.622199 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.622203 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.622207 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:02.622210 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:02.622214 | 
orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:02.622218 | orchestrator | 2026-03-27 00:56:02.622221 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-27 00:56:02.622228 | orchestrator | Friday 27 March 2026 00:54:20 +0000 (0:00:03.073) 0:08:49.924 ********** 2026-03-27 00:56:02.622232 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.622239 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.622243 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.622247 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:02.622250 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:02.622254 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:02.622258 | orchestrator | 2026-03-27 00:56:02.622261 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-27 00:56:02.622265 | orchestrator | 2026-03-27 00:56:02.622269 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-27 00:56:02.622273 | orchestrator | Friday 27 March 2026 00:54:21 +0000 (0:00:00.999) 0:08:50.924 ********** 2026-03-27 00:56:02.622276 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.622280 | orchestrator | 2026-03-27 00:56:02.622284 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-27 00:56:02.622287 | orchestrator | Friday 27 March 2026 00:54:22 +0000 (0:00:00.989) 0:08:51.913 ********** 2026-03-27 00:56:02.622294 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.622297 | orchestrator | 2026-03-27 00:56:02.622301 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-27 00:56:02.622305 | 
orchestrator | Friday 27 March 2026 00:54:23 +0000 (0:00:00.519) 0:08:52.433 **********
2026-03-27 00:56:02.622309 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.622312 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.622316 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.622320 | orchestrator |
2026-03-27 00:56:02.622323 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-27 00:56:02.622327 | orchestrator | Friday 27 March 2026 00:54:23 +0000 (0:00:00.488) 0:08:52.921 **********
2026-03-27 00:56:02.622331 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.622334 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.622338 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.622342 | orchestrator |
2026-03-27 00:56:02.622345 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-27 00:56:02.622349 | orchestrator | Friday 27 March 2026 00:54:24 +0000 (0:00:00.826) 0:08:53.748 **********
2026-03-27 00:56:02.622353 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.622356 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.622360 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.622364 | orchestrator |
2026-03-27 00:56:02.622367 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-27 00:56:02.622371 | orchestrator | Friday 27 March 2026 00:54:25 +0000 (0:00:00.622) 0:08:54.370 **********
2026-03-27 00:56:02.622375 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.622378 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.622382 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.622386 | orchestrator |
2026-03-27 00:56:02.622390 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-27 00:56:02.622393 | orchestrator | Friday 27 March 2026 00:54:26 +0000 (0:00:00.694) 0:08:55.064 **********
2026-03-27 00:56:02.622397 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.622401 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.622404 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.622408 | orchestrator |
2026-03-27 00:56:02.622412 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-27 00:56:02.622415 | orchestrator | Friday 27 March 2026 00:54:26 +0000 (0:00:00.657) 0:08:55.721 **********
2026-03-27 00:56:02.622419 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.622423 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.622426 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.622430 | orchestrator |
2026-03-27 00:56:02.622434 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-27 00:56:02.622438 | orchestrator | Friday 27 March 2026 00:54:27 +0000 (0:00:00.297) 0:08:56.018 **********
2026-03-27 00:56:02.622444 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.622448 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.622451 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.622455 | orchestrator |
2026-03-27 00:56:02.622459 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-27 00:56:02.622463 | orchestrator | Friday 27 March 2026 00:54:27 +0000 (0:00:00.262) 0:08:56.281 **********
2026-03-27 00:56:02.622466 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.622470 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.622474 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.622477 | orchestrator |
2026-03-27 00:56:02.622481 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-27 00:56:02.622485 | orchestrator | Friday 27 March 2026 00:54:28 +0000 (0:00:00.836) 0:08:57.117 **********
2026-03-27 00:56:02.622489 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.622492 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.622502 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.622506 | orchestrator |
2026-03-27 00:56:02.622509 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-27 00:56:02.622513 | orchestrator | Friday 27 March 2026 00:54:28 +0000 (0:00:00.836) 0:08:57.954 **********
2026-03-27 00:56:02.622523 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.622527 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.622531 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.622540 | orchestrator |
2026-03-27 00:56:02.622544 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-27 00:56:02.622548 | orchestrator | Friday 27 March 2026 00:54:29 +0000 (0:00:00.277) 0:08:58.231 **********
2026-03-27 00:56:02.622551 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.622555 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.622565 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.622569 | orchestrator |
2026-03-27 00:56:02.622572 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-27 00:56:02.622581 | orchestrator | Friday 27 March 2026 00:54:29 +0000 (0:00:00.367) 0:08:58.599 **********
2026-03-27 00:56:02.622585 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.622589 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.622595 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.622615 | orchestrator |
2026-03-27 00:56:02.622619 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-27 00:56:02.622628 | orchestrator | Friday 27 March 2026 00:54:30 +0000 (0:00:00.440) 0:08:59.039 **********
2026-03-27 00:56:02.622632 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.622636 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.622639 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.622643 | orchestrator |
2026-03-27 00:56:02.622647 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-27 00:56:02.622650 | orchestrator | Friday 27 March 2026 00:54:30 +0000 (0:00:00.704) 0:08:59.744 **********
2026-03-27 00:56:02.622654 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.622658 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.622661 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.622665 | orchestrator |
2026-03-27 00:56:02.622669 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-27 00:56:02.622673 | orchestrator | Friday 27 March 2026 00:54:31 +0000 (0:00:00.285) 0:09:00.029 **********
2026-03-27 00:56:02.622676 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.622680 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.622684 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.622687 | orchestrator |
2026-03-27 00:56:02.622693 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-27 00:56:02.622697 | orchestrator | Friday 27 March 2026 00:54:31 +0000 (0:00:00.287) 0:09:00.316 **********
2026-03-27 00:56:02.622701 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.622707 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.622711 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.622714 | orchestrator |
2026-03-27 00:56:02.622718 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-27 00:56:02.622722 | orchestrator | Friday 27 March 2026 00:54:31 +0000 (0:00:00.294) 0:09:00.611 **********
2026-03-27 00:56:02.622725 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.622729 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.622733 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.622737 | orchestrator |
2026-03-27 00:56:02.622740 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-27 00:56:02.622744 | orchestrator | Friday 27 March 2026 00:54:32 +0000 (0:00:00.448) 0:09:01.059 **********
2026-03-27 00:56:02.622748 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.622751 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.622755 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.622759 | orchestrator |
2026-03-27 00:56:02.622763 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-27 00:56:02.622766 | orchestrator | Friday 27 March 2026 00:54:32 +0000 (0:00:00.291) 0:09:01.350 **********
2026-03-27 00:56:02.622770 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.622774 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.622777 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.622781 | orchestrator |
2026-03-27 00:56:02.622785 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-27 00:56:02.622789 | orchestrator | Friday 27 March 2026 00:54:32 +0000 (0:00:00.468) 0:09:01.818 **********
2026-03-27 00:56:02.622792 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.622796 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.622800 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-27 00:56:02.622803 | orchestrator |
2026-03-27 00:56:02.622807 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-27 00:56:02.622811 | orchestrator | Friday 27 March 2026 00:54:33 +0000 (0:00:00.525) 0:09:02.344 **********
2026-03-27 00:56:02.622815 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-27 00:56:02.622818 | orchestrator |
2026-03-27 00:56:02.622822 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-27 00:56:02.622826 | orchestrator | Friday 27 March 2026 00:54:34 +0000 (0:00:01.601) 0:09:03.945 **********
2026-03-27 00:56:02.622830 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-27 00:56:02.622835 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.622839 | orchestrator |
2026-03-27 00:56:02.622843 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-27 00:56:02.622846 | orchestrator | Friday 27 March 2026 00:54:35 +0000 (0:00:00.226) 0:09:04.172 **********
2026-03-27 00:56:02.622851 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-27 00:56:02.622858 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-27 00:56:02.622862 | orchestrator |
2026-03-27 00:56:02.622866 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-27 00:56:02.622869 | orchestrator | Friday 27 March 2026 00:54:40 +0000 (0:00:04.955) 0:09:09.127 **********
2026-03-27 00:56:02.622873 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-27 00:56:02.622899 | orchestrator |
2026-03-27 00:56:02.622904 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-27 00:56:02.622907 | orchestrator | Friday 27 March 2026 00:54:42 +0000 (0:00:02.509) 0:09:11.637 **********
2026-03-27 00:56:02.622914 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:56:02.622918 | orchestrator |
2026-03-27 00:56:02.622921 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-27 00:56:02.622925 | orchestrator | Friday 27 March 2026 00:54:43 +0000 (0:00:00.621) 0:09:12.259 **********
2026-03-27 00:56:02.622929 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-27 00:56:02.622933 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-27 00:56:02.622936 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-27 00:56:02.622940 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-27 00:56:02.622944 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-27 00:56:02.622947 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-27 00:56:02.622951 | orchestrator |
2026-03-27 00:56:02.622955 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-27 00:56:02.622958 | orchestrator | Friday 27 March 2026 00:54:44 +0000 (0:00:01.168) 0:09:13.427 **********
2026-03-27 00:56:02.622962 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-27 00:56:02.622966 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-27 00:56:02.622970 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-27 00:56:02.622973 | orchestrator |
2026-03-27 00:56:02.622977 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-27 00:56:02.622981 | orchestrator | Friday 27 March 2026 00:54:46 +0000 (0:00:01.798) 0:09:15.226 **********
2026-03-27 00:56:02.622985 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-27 00:56:02.622989 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-27 00:56:02.622993 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.622997 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-27 00:56:02.623000 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-27 00:56:02.623004 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.623008 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-27 00:56:02.623011 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-27 00:56:02.623015 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.623019 | orchestrator |
2026-03-27 00:56:02.623023 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-27 00:56:02.623026 | orchestrator | Friday 27 March 2026 00:54:47 +0000 (0:00:01.258) 0:09:16.484 **********
2026-03-27 00:56:02.623030 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.623034 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.623037 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.623041 | orchestrator |
2026-03-27 00:56:02.623045 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-27 00:56:02.623049 | orchestrator | Friday 27 March 2026 00:54:49 +0000 (0:00:02.307) 0:09:18.792 **********
2026-03-27 00:56:02.623052 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623056 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.623060 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.623063 | orchestrator |
2026-03-27 00:56:02.623067 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-27 00:56:02.623071 | orchestrator | Friday 27 March 2026 00:54:50 +0000 (0:00:00.652) 0:09:19.444 **********
2026-03-27 00:56:02.623074 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:56:02.623078 | orchestrator |
2026-03-27 00:56:02.623084 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-27 00:56:02.623088 | orchestrator | Friday 27 March 2026 00:54:50 +0000 (0:00:00.518) 0:09:19.962 **********
2026-03-27 00:56:02.623092 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:56:02.623095 | orchestrator |
2026-03-27 00:56:02.623099 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-27 00:56:02.623103 | orchestrator | Friday 27 March 2026 00:54:51 +0000 (0:00:00.773) 0:09:20.736 **********
2026-03-27 00:56:02.623106 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.623110 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.623114 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.623117 | orchestrator |
2026-03-27 00:56:02.623121 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-27 00:56:02.623125 | orchestrator | Friday 27 March 2026 00:54:52 +0000 (0:00:01.225) 0:09:21.962 **********
2026-03-27 00:56:02.623128 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.623132 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.623136 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.623139 | orchestrator |
2026-03-27 00:56:02.623143 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-27 00:56:02.623147 | orchestrator | Friday 27 March 2026 00:54:54 +0000 (0:00:01.317) 0:09:23.279 **********
2026-03-27 00:56:02.623151 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.623154 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.623158 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.623162 | orchestrator |
2026-03-27 00:56:02.623165 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-27 00:56:02.623169 | orchestrator | Friday 27 March 2026 00:54:56 +0000 (0:00:02.097) 0:09:25.377 **********
2026-03-27 00:56:02.623173 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.623176 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.623180 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.623184 | orchestrator |
2026-03-27 00:56:02.623187 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-27 00:56:02.623228 | orchestrator | Friday 27 March 2026 00:54:58 +0000 (0:00:02.359) 0:09:27.736 **********
2026-03-27 00:56:02.623241 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623245 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623248 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623252 | orchestrator |
2026-03-27 00:56:02.623259 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-27 00:56:02.623263 | orchestrator | Friday 27 March 2026 00:54:59 +0000 (0:00:01.206) 0:09:28.943 **********
2026-03-27 00:56:02.623267 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.623270 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.623274 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.623278 | orchestrator |
2026-03-27 00:56:02.623282 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-27 00:56:02.623285 | orchestrator | Friday 27 March 2026 00:55:00 +0000 (0:00:01.017) 0:09:29.960 **********
2026-03-27 00:56:02.623289 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:56:02.623293 | orchestrator |
2026-03-27 00:56:02.623296 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-27 00:56:02.623300 | orchestrator | Friday 27 March 2026 00:55:01 +0000 (0:00:00.567) 0:09:30.528 **********
2026-03-27 00:56:02.623304 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623308 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623311 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623315 | orchestrator |
2026-03-27 00:56:02.623319 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-27 00:56:02.623324 | orchestrator | Friday 27 March 2026 00:55:01 +0000 (0:00:00.311) 0:09:30.839 **********
2026-03-27 00:56:02.623331 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.623335 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.623339 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.623343 | orchestrator |
2026-03-27 00:56:02.623346 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-27 00:56:02.623350 | orchestrator | Friday 27 March 2026 00:55:03 +0000 (0:00:01.484) 0:09:32.323 **********
2026-03-27 00:56:02.623354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-27 00:56:02.623357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-27 00:56:02.623361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-27 00:56:02.623365 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623369 | orchestrator |
2026-03-27 00:56:02.623372 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-27 00:56:02.623376 | orchestrator | Friday 27 March 2026 00:55:03 +0000 (0:00:00.635) 0:09:32.959 **********
2026-03-27 00:56:02.623380 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623383 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623387 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623391 | orchestrator |
2026-03-27 00:56:02.623394 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-27 00:56:02.623398 | orchestrator |
2026-03-27 00:56:02.623402 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-27 00:56:02.623406 | orchestrator | Friday 27 March 2026 00:55:04 +0000 (0:00:00.485) 0:09:33.444 **********
2026-03-27 00:56:02.623409 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:56:02.623413 | orchestrator |
2026-03-27 00:56:02.623417 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-27 00:56:02.623421 | orchestrator | Friday 27 March 2026 00:55:05 +0000 (0:00:00.575) 0:09:34.020 **********
2026-03-27 00:56:02.623425 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:56:02.623428 | orchestrator |
2026-03-27 00:56:02.623432 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-27 00:56:02.623436 | orchestrator | Friday 27 March 2026 00:55:05 +0000 (0:00:00.449) 0:09:34.469 **********
2026-03-27 00:56:02.623440 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623443 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.623447 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.623451 | orchestrator |
2026-03-27 00:56:02.623455 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-27 00:56:02.623458 | orchestrator | Friday 27 March 2026 00:55:05 +0000 (0:00:00.415) 0:09:34.885 **********
2026-03-27 00:56:02.623462 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623466 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623469 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623473 | orchestrator |
2026-03-27 00:56:02.623477 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-27 00:56:02.623481 | orchestrator | Friday 27 March 2026 00:55:06 +0000 (0:00:00.731) 0:09:35.617 **********
2026-03-27 00:56:02.623484 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623488 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623492 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623495 | orchestrator |
2026-03-27 00:56:02.623499 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-27 00:56:02.623503 | orchestrator | Friday 27 March 2026 00:55:07 +0000 (0:00:00.627) 0:09:36.244 **********
2026-03-27 00:56:02.623507 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623510 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623514 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623518 | orchestrator |
2026-03-27 00:56:02.623521 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-27 00:56:02.623527 | orchestrator | Friday 27 March 2026 00:55:07 +0000 (0:00:00.668) 0:09:36.913 **********
2026-03-27 00:56:02.623531 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623535 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.623539 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.623542 | orchestrator |
2026-03-27 00:56:02.623546 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-27 00:56:02.623550 | orchestrator | Friday 27 March 2026 00:55:08 +0000 (0:00:00.590) 0:09:37.503 **********
2026-03-27 00:56:02.623553 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623557 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.623561 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.623565 | orchestrator |
2026-03-27 00:56:02.623568 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-27 00:56:02.623574 | orchestrator | Friday 27 March 2026 00:55:08 +0000 (0:00:00.285) 0:09:37.789 **********
2026-03-27 00:56:02.623578 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623582 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.623586 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.623589 | orchestrator |
2026-03-27 00:56:02.623593 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-27 00:56:02.623597 | orchestrator | Friday 27 March 2026 00:55:09 +0000 (0:00:00.263) 0:09:38.052 **********
2026-03-27 00:56:02.623601 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623604 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623608 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623612 | orchestrator |
2026-03-27 00:56:02.623616 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-27 00:56:02.623619 | orchestrator | Friday 27 March 2026 00:55:09 +0000 (0:00:00.667) 0:09:38.720 **********
2026-03-27 00:56:02.623624 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623630 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623636 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623643 | orchestrator |
2026-03-27 00:56:02.623649 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-27 00:56:02.623654 | orchestrator | Friday 27 March 2026 00:55:10 +0000 (0:00:00.852) 0:09:39.573 **********
2026-03-27 00:56:02.623663 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623669 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.623675 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.623682 | orchestrator |
2026-03-27 00:56:02.623688 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-27 00:56:02.623694 | orchestrator | Friday 27 March 2026 00:55:10 +0000 (0:00:00.273) 0:09:39.846 **********
2026-03-27 00:56:02.623700 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623706 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.623712 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.623718 | orchestrator |
2026-03-27 00:56:02.623725 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-27 00:56:02.623730 | orchestrator | Friday 27 March 2026 00:55:11 +0000 (0:00:00.248) 0:09:40.095 **********
2026-03-27 00:56:02.623736 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623741 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623747 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623752 | orchestrator |
2026-03-27 00:56:02.623758 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-27 00:56:02.623764 | orchestrator | Friday 27 March 2026 00:55:11 +0000 (0:00:00.305) 0:09:40.400 **********
2026-03-27 00:56:02.623770 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623778 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623781 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623785 | orchestrator |
2026-03-27 00:56:02.623789 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-27 00:56:02.623793 | orchestrator | Friday 27 March 2026 00:55:11 +0000 (0:00:00.551) 0:09:40.952 **********
2026-03-27 00:56:02.623800 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623804 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623808 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623811 | orchestrator |
2026-03-27 00:56:02.623815 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-27 00:56:02.623819 | orchestrator | Friday 27 March 2026 00:55:12 +0000 (0:00:00.418) 0:09:41.371 **********
2026-03-27 00:56:02.623822 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623826 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.623830 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.623833 | orchestrator |
2026-03-27 00:56:02.623837 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-27 00:56:02.623841 | orchestrator | Friday 27 March 2026 00:55:12 +0000 (0:00:00.390) 0:09:41.761 **********
2026-03-27 00:56:02.623844 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623848 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.623852 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.623855 | orchestrator |
2026-03-27 00:56:02.623859 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-27 00:56:02.623863 | orchestrator | Friday 27 March 2026 00:55:13 +0000 (0:00:00.391) 0:09:42.152 **********
2026-03-27 00:56:02.623867 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.623870 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.623874 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.623877 | orchestrator |
2026-03-27 00:56:02.623891 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-27 00:56:02.623895 | orchestrator | Friday 27 March 2026 00:55:13 +0000 (0:00:00.592) 0:09:42.745 **********
2026-03-27 00:56:02.623899 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623903 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623906 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623910 | orchestrator |
2026-03-27 00:56:02.623914 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-27 00:56:02.623918 | orchestrator | Friday 27 March 2026 00:55:14 +0000 (0:00:00.287) 0:09:43.033 **********
2026-03-27 00:56:02.623921 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:56:02.623925 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:56:02.623929 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:56:02.623932 | orchestrator |
2026-03-27 00:56:02.623936 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-27 00:56:02.623940 | orchestrator | Friday 27 March 2026 00:55:14 +0000 (0:00:00.506) 0:09:43.539 **********
2026-03-27 00:56:02.623944 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:56:02.623948 | orchestrator |
2026-03-27 00:56:02.623951 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-27 00:56:02.623955 | orchestrator | Friday 27 March 2026 00:55:15 +0000 (0:00:00.820) 0:09:44.360 **********
2026-03-27 00:56:02.623959 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-27 00:56:02.623962 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-27 00:56:02.623966 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-27 00:56:02.623970 | orchestrator |
2026-03-27 00:56:02.623977 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-27 00:56:02.623981 | orchestrator | Friday 27 March 2026 00:55:17 +0000 (0:00:01.876) 0:09:46.236 **********
2026-03-27 00:56:02.623984 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-27 00:56:02.623988 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-27 00:56:02.623992 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.623996 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-27 00:56:02.623999 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-27 00:56:02.624003 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.624007 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-27 00:56:02.624014 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-27 00:56:02.624018 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.624022 | orchestrator |
2026-03-27 00:56:02.624025 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-27 00:56:02.624029 | orchestrator | Friday 27 March 2026 00:55:18 +0000 (0:00:01.198) 0:09:47.435 **********
2026-03-27 00:56:02.624033 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.624036 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:56:02.624040 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:56:02.624044 | orchestrator |
2026-03-27 00:56:02.624049 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-27 00:56:02.624053 | orchestrator | Friday 27 March 2026 00:55:18 +0000 (0:00:00.349) 0:09:47.785 **********
2026-03-27 00:56:02.624057 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 00:56:02.624061 | orchestrator |
2026-03-27 00:56:02.624065 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-27 00:56:02.624068 | orchestrator | Friday 27 March 2026 00:55:19 +0000 (0:00:00.814) 0:09:48.599 **********
2026-03-27 00:56:02.624072 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.624076 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.624080 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-27 00:56:02.624084 | orchestrator |
2026-03-27 00:56:02.624087 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-27 00:56:02.624091 | orchestrator | Friday 27 March 2026 00:55:20 +0000 (0:00:00.853) 0:09:49.453 **********
2026-03-27 00:56:02.624095 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-27 00:56:02.624098 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-27 00:56:02.624102 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-27 00:56:02.624106 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-27 00:56:02.624110 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-27 00:56:02.624113 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-27 00:56:02.624117 | orchestrator |
2026-03-27 00:56:02.624121 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-27 00:56:02.624124 | orchestrator | Friday 27 March 2026 00:55:23 +0000 (0:00:03.015) 0:09:52.468 **********
2026-03-27 00:56:02.624128 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-27 00:56:02.624132 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-27 00:56:02.624135 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-27 00:56:02.624139 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-27 00:56:02.624143 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-27 00:56:02.624146 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-27 00:56:02.624150 | orchestrator |
2026-03-27 00:56:02.624154 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-27 00:56:02.624157 | orchestrator | Friday 27 March 2026 00:55:25 +0000 (0:00:02.270) 0:09:54.739 **********
2026-03-27 00:56:02.624163 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-27 00:56:02.624167 | orchestrator | changed: [testbed-node-3]
2026-03-27 00:56:02.624171 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-27 00:56:02.624175 | orchestrator | changed: [testbed-node-5]
2026-03-27 00:56:02.624178 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-27 00:56:02.624182 | orchestrator | changed: [testbed-node-4]
2026-03-27 00:56:02.624186 | orchestrator |
2026-03-27 00:56:02.624189 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-27 00:56:02.624193 | orchestrator | Friday 27 March 2026 00:55:26 +0000 (0:00:01.052) 0:09:55.791 **********
2026-03-27 00:56:02.624197 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-03-27 00:56:02.624200 | orchestrator |
2026-03-27 00:56:02.624204 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-27 00:56:02.624208 | orchestrator | Friday 27 March 2026 00:55:27 +0000 (0:00:00.197) 0:09:55.988 **********
2026-03-27 00:56:02.624214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-27 00:56:02.624218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-27 00:56:02.624222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-27 00:56:02.624226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-27 00:56:02.624230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-27 00:56:02.624233 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:56:02.624237 | orchestrator |
2026-03-27 00:56:02.624241 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-27 00:56:02.624244 | orchestrator | Friday 27 March 2026 00:55:27 +0000 (0:00:00.509) 0:09:56.498 **********
2026-03-27 00:56:02.624250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-27 00:56:02.624254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-27 00:56:02.624258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-27 00:56:02.624261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-27 00:56:02.624265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-27 00:56:02.624269 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.624273 | orchestrator | 2026-03-27 00:56:02.624276 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-27 00:56:02.624280 | orchestrator | Friday 27 March 2026 00:55:28 +0000 (0:00:00.511) 0:09:57.009 ********** 2026-03-27 00:56:02.624284 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-27 00:56:02.624288 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-27 00:56:02.624292 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-27 00:56:02.624295 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-27 00:56:02.624302 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-27 00:56:02.624305 | orchestrator | 2026-03-27 00:56:02.624309 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-27 00:56:02.624313 | orchestrator | Friday 27 March 2026 00:55:48 +0000 (0:00:20.659) 0:10:17.669 
********** 2026-03-27 00:56:02.624317 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.624320 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.624324 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.624328 | orchestrator | 2026-03-27 00:56:02.624332 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-27 00:56:02.624335 | orchestrator | Friday 27 March 2026 00:55:48 +0000 (0:00:00.284) 0:10:17.953 ********** 2026-03-27 00:56:02.624339 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.624343 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.624347 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.624350 | orchestrator | 2026-03-27 00:56:02.624354 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-27 00:56:02.624358 | orchestrator | Friday 27 March 2026 00:55:49 +0000 (0:00:00.469) 0:10:18.423 ********** 2026-03-27 00:56:02.624361 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.624365 | orchestrator | 2026-03-27 00:56:02.624369 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-27 00:56:02.624373 | orchestrator | Friday 27 March 2026 00:55:49 +0000 (0:00:00.462) 0:10:18.885 ********** 2026-03-27 00:56:02.624376 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.624380 | orchestrator | 2026-03-27 00:56:02.624384 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-27 00:56:02.624387 | orchestrator | Friday 27 March 2026 00:55:50 +0000 (0:00:00.625) 0:10:19.511 ********** 2026-03-27 00:56:02.624391 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.624395 | orchestrator | 
changed: [testbed-node-4] 2026-03-27 00:56:02.624399 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.624402 | orchestrator | 2026-03-27 00:56:02.624406 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-27 00:56:02.624412 | orchestrator | Friday 27 March 2026 00:55:51 +0000 (0:00:01.179) 0:10:20.691 ********** 2026-03-27 00:56:02.624416 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.624420 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.624423 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.624427 | orchestrator | 2026-03-27 00:56:02.624431 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-27 00:56:02.624435 | orchestrator | Friday 27 March 2026 00:55:52 +0000 (0:00:01.118) 0:10:21.809 ********** 2026-03-27 00:56:02.624438 | orchestrator | changed: [testbed-node-4] 2026-03-27 00:56:02.624442 | orchestrator | changed: [testbed-node-3] 2026-03-27 00:56:02.624446 | orchestrator | changed: [testbed-node-5] 2026-03-27 00:56:02.624449 | orchestrator | 2026-03-27 00:56:02.624453 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-27 00:56:02.624457 | orchestrator | Friday 27 March 2026 00:55:54 +0000 (0:00:01.757) 0:10:23.567 ********** 2026-03-27 00:56:02.624461 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-27 00:56:02.624464 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-27 00:56:02.624470 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-27 00:56:02.624476 | orchestrator | 2026-03-27 00:56:02.624480 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-27 00:56:02.624484 | orchestrator | Friday 27 March 2026 00:55:57 +0000 (0:00:02.790) 0:10:26.357 ********** 2026-03-27 00:56:02.624487 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.624491 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.624495 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.624498 | orchestrator | 2026-03-27 00:56:02.624502 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-27 00:56:02.624506 | orchestrator | Friday 27 March 2026 00:55:57 +0000 (0:00:00.355) 0:10:26.713 ********** 2026-03-27 00:56:02.624509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:56:02.624513 | orchestrator | 2026-03-27 00:56:02.624517 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-27 00:56:02.624521 | orchestrator | Friday 27 March 2026 00:55:58 +0000 (0:00:00.911) 0:10:27.624 ********** 2026-03-27 00:56:02.624524 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.624528 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.624532 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.624535 | orchestrator | 2026-03-27 00:56:02.624539 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-27 00:56:02.624543 | orchestrator | Friday 27 March 2026 00:55:58 +0000 (0:00:00.317) 0:10:27.942 ********** 2026-03-27 00:56:02.624547 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.624550 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:56:02.624554 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:56:02.624558 | orchestrator | 2026-03-27 00:56:02.624561 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-27 
00:56:02.624565 | orchestrator | Friday 27 March 2026 00:55:59 +0000 (0:00:00.353) 0:10:28.296 ********** 2026-03-27 00:56:02.624569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:56:02.624572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-27 00:56:02.624576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:56:02.624580 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:56:02.624584 | orchestrator | 2026-03-27 00:56:02.624587 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-27 00:56:02.624591 | orchestrator | Friday 27 March 2026 00:56:00 +0000 (0:00:00.961) 0:10:29.257 ********** 2026-03-27 00:56:02.624595 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:56:02.624598 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:56:02.624602 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:56:02.624606 | orchestrator | 2026-03-27 00:56:02.624609 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:56:02.624613 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-27 00:56:02.624618 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-27 00:56:02.624625 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-27 00:56:02.624630 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-27 00:56:02.624636 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-27 00:56:02.624642 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-27 00:56:02.624648 | orchestrator | 2026-03-27 
00:56:02.624653 | orchestrator | 2026-03-27 00:56:02.624664 | orchestrator | 2026-03-27 00:56:02.624671 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:56:02.624678 | orchestrator | Friday 27 March 2026 00:56:00 +0000 (0:00:00.224) 0:10:29.482 ********** 2026-03-27 00:56:02.624691 | orchestrator | =============================================================================== 2026-03-27 00:56:02.624701 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 67.81s 2026-03-27 00:56:02.624708 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.92s 2026-03-27 00:56:02.624712 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.57s 2026-03-27 00:56:02.624716 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 20.66s 2026-03-27 00:56:02.624729 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.59s 2026-03-27 00:56:02.624733 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.93s 2026-03-27 00:56:02.624736 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.05s 2026-03-27 00:56:02.624740 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 7.56s 2026-03-27 00:56:02.624744 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.23s 2026-03-27 00:56:02.624747 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.01s 2026-03-27 00:56:02.624751 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 5.91s 2026-03-27 00:56:02.624759 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 5.38s 2026-03-27 00:56:02.624763 | orchestrator | ceph-mds : Create 
filesystem pools -------------------------------------- 4.96s 2026-03-27 00:56:02.624767 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.65s 2026-03-27 00:56:02.624770 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.17s 2026-03-27 00:56:02.624774 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.90s 2026-03-27 00:56:02.624778 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.84s 2026-03-27 00:56:02.624781 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.68s 2026-03-27 00:56:02.624785 | orchestrator | ceph-container-common : Enable ceph.target ------------------------------ 3.49s 2026-03-27 00:56:02.624789 | orchestrator | ceph-osd : Set noup flag ------------------------------------------------ 3.46s 2026-03-27 00:56:02.624793 | orchestrator | 2026-03-27 00:56:02 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:56:05.650194 | orchestrator | 2026-03-27 00:56:05 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:56:05.651207 | orchestrator | 2026-03-27 00:56:05 | INFO  | Task 5dbdfa0e-d27e-4e45-abaf-a0c0b66e2b13 is in state STARTED 2026-03-27 00:56:05.653826 | orchestrator | 2026-03-27 00:56:05 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:56:05.653872 | orchestrator | 2026-03-27 00:56:05 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:56:08.700981 | orchestrator | 2026-03-27 00:56:08 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:56:08.702478 | orchestrator | 2026-03-27 00:56:08 | INFO  | Task 5dbdfa0e-d27e-4e45-abaf-a0c0b66e2b13 is in state STARTED 2026-03-27 00:56:08.703951 | orchestrator | 2026-03-27 00:56:08 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:56:08.704334 | 
orchestrator | 2026-03-27 00:56:08 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:56:11.748959 | orchestrator | 2026-03-27 00:56:11 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:56:11.750140 | orchestrator | 2026-03-27 00:56:11 | INFO  | Task 5dbdfa0e-d27e-4e45-abaf-a0c0b66e2b13 is in state STARTED 2026-03-27 00:56:11.751558 | orchestrator | 2026-03-27 00:56:11 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:56:11.751617 | orchestrator | 2026-03-27 00:56:11 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:56:14.798467 | orchestrator | 2026-03-27 00:56:14 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:56:14.800602 | orchestrator | 2026-03-27 00:56:14 | INFO  | Task 5dbdfa0e-d27e-4e45-abaf-a0c0b66e2b13 is in state STARTED 2026-03-27 00:56:14.802211 | orchestrator | 2026-03-27 00:56:14 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:56:14.802705 | orchestrator | 2026-03-27 00:56:14 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:56:17.849940 | orchestrator | 2026-03-27 00:56:17 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state STARTED 2026-03-27 00:56:17.850685 | orchestrator | 2026-03-27 00:56:17 | INFO  | Task 5dbdfa0e-d27e-4e45-abaf-a0c0b66e2b13 is in state STARTED 2026-03-27 00:56:17.853106 | orchestrator | 2026-03-27 00:56:17 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:56:17.853164 | orchestrator | 2026-03-27 00:56:17 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:56:20.895316 | orchestrator | 2026-03-27 00:56:20 | INFO  | Task fca60d80-e137-4a09-82d8-fe4c7203e6ac is in state SUCCESS 2026-03-27 00:56:20.896106 | orchestrator | 2026-03-27 00:56:20.896139 | orchestrator | 2026-03-27 00:56:20.896145 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2026-03-27 00:56:20.896150 | orchestrator | 2026-03-27 00:56:20.896154 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 00:56:20.896158 | orchestrator | Friday 27 March 2026 00:53:59 +0000 (0:00:00.410) 0:00:00.410 ********** 2026-03-27 00:56:20.896162 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:20.896166 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:20.896170 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:20.896174 | orchestrator | 2026-03-27 00:56:20.896178 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 00:56:20.896182 | orchestrator | Friday 27 March 2026 00:53:59 +0000 (0:00:00.336) 0:00:00.746 ********** 2026-03-27 00:56:20.896186 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-27 00:56:20.896190 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-27 00:56:20.896194 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-27 00:56:20.896198 | orchestrator | 2026-03-27 00:56:20.896202 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-27 00:56:20.896205 | orchestrator | 2026-03-27 00:56:20.896209 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-27 00:56:20.896224 | orchestrator | Friday 27 March 2026 00:54:00 +0000 (0:00:00.309) 0:00:01.056 ********** 2026-03-27 00:56:20.896233 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:20.896239 | orchestrator | 2026-03-27 00:56:20.896245 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-27 00:56:20.896250 | orchestrator | Friday 27 March 2026 00:54:00 +0000 (0:00:00.614) 0:00:01.670 ********** 2026-03-27 00:56:20.896257 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-27 00:56:20.896263 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-27 00:56:20.896269 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-27 00:56:20.896275 | orchestrator | 2026-03-27 00:56:20.896281 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-27 00:56:20.896288 | orchestrator | Friday 27 March 2026 00:54:01 +0000 (0:00:01.078) 0:00:02.748 ********** 2026-03-27 00:56:20.896309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896374 | orchestrator | 2026-03-27 00:56:20.896380 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-27 00:56:20.896386 | orchestrator | Friday 27 March 2026 00:54:03 +0000 (0:00:01.456) 0:00:04.204 ********** 2026-03-27 00:56:20.896392 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:20.896395 | orchestrator | 2026-03-27 00:56:20.896411 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-27 00:56:20.896415 | orchestrator | Friday 27 March 2026 00:54:03 +0000 (0:00:00.485) 0:00:04.690 ********** 2026-03-27 00:56:20.896424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-03-27 00:56:20.896459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896466 | orchestrator | 2026-03-27 00:56:20.896470 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-27 00:56:20.896473 | orchestrator | Friday 27 March 2026 00:54:06 +0000 (0:00:02.367) 0:00:07.057 ********** 2026-03-27 00:56:20.896477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-27 00:56:20.896482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-27 00:56:20.896486 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:20.896490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-27 00:56:20.896499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-27 00:56:20.896507 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:20.896511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-27 00:56:20.896515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-27 00:56:20.896519 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:20.896523 | orchestrator | 2026-03-27 00:56:20.896527 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-27 00:56:20.896531 | orchestrator | Friday 27 March 2026 00:54:06 +0000 (0:00:00.648) 0:00:07.706 ********** 2026-03-27 00:56:20.896535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-27 00:56:20.896545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-27 00:56:20.896552 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:20.896557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-27 00:56:20.896561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-27 00:56:20.896565 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:20.896569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-27 00:56:20.896576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-27 00:56:20.896582 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:20.896586 | orchestrator | 2026-03-27 00:56:20.896590 | 
orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-27 00:56:20.896595 | orchestrator | Friday 27 March 2026 00:54:07 +0000 (0:00:00.814) 0:00:08.521 ********** 2026-03-27 00:56:20.896600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896608 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896626 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896635 | orchestrator | 2026-03-27 00:56:20.896639 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-27 00:56:20.896643 | orchestrator | Friday 27 March 2026 00:54:10 +0000 (0:00:02.883) 0:00:11.405 ********** 2026-03-27 00:56:20.896647 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:20.896651 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:20.896655 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:20.896659 | orchestrator | 2026-03-27 00:56:20.896662 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-27 00:56:20.896666 | orchestrator | Friday 27 March 2026 00:54:13 +0000 (0:00:03.039) 0:00:14.444 ********** 2026-03-27 00:56:20.896670 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:20.896674 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:20.896678 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:20.896682 | orchestrator | 2026-03-27 00:56:20.896685 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-27 00:56:20.896689 | orchestrator | Friday 27 March 2026 00:54:15 +0000 (0:00:01.462) 0:00:15.907 ********** 2026-03-27 00:56:20.896693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-27 00:56:20.896715 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-27 00:56:20.896735 | orchestrator | 2026-03-27 00:56:20.896739 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-27 00:56:20.896743 | orchestrator | Friday 27 March 2026 00:54:17 +0000 (0:00:02.317) 0:00:18.224 ********** 2026-03-27 00:56:20.896748 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:20.896752 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:20.896758 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:20.896762 | orchestrator | 2026-03-27 00:56:20.896767 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-27 00:56:20.896771 | orchestrator | Friday 27 March 2026 00:54:18 +0000 
(0:00:00.695) 0:00:18.919 ********** 2026-03-27 00:56:20.896776 | orchestrator | 2026-03-27 00:56:20.896780 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-27 00:56:20.896784 | orchestrator | Friday 27 March 2026 00:54:18 +0000 (0:00:00.123) 0:00:19.042 ********** 2026-03-27 00:56:20.896789 | orchestrator | 2026-03-27 00:56:20.896793 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-27 00:56:20.896797 | orchestrator | Friday 27 March 2026 00:54:18 +0000 (0:00:00.064) 0:00:19.107 ********** 2026-03-27 00:56:20.896802 | orchestrator | 2026-03-27 00:56:20.896806 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-27 00:56:20.896810 | orchestrator | Friday 27 March 2026 00:54:18 +0000 (0:00:00.079) 0:00:19.187 ********** 2026-03-27 00:56:20.896816 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:20.896822 | orchestrator | 2026-03-27 00:56:20.896830 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-27 00:56:20.896838 | orchestrator | Friday 27 March 2026 00:54:18 +0000 (0:00:00.205) 0:00:19.392 ********** 2026-03-27 00:56:20.896844 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:20.896850 | orchestrator | 2026-03-27 00:56:20.896855 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-27 00:56:20.896861 | orchestrator | Friday 27 March 2026 00:54:18 +0000 (0:00:00.277) 0:00:19.670 ********** 2026-03-27 00:56:20.896867 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:20.896873 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:20.896899 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:20.896905 | orchestrator | 2026-03-27 00:56:20.896912 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-27 
00:56:20.896919 | orchestrator | Friday 27 March 2026 00:55:06 +0000 (0:00:47.983) 0:01:07.653 ********** 2026-03-27 00:56:20.896925 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:20.896932 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:20.896938 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:20.896945 | orchestrator | 2026-03-27 00:56:20.896951 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-27 00:56:20.896957 | orchestrator | Friday 27 March 2026 00:56:03 +0000 (0:00:56.720) 0:02:04.374 ********** 2026-03-27 00:56:20.896965 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:20.896969 | orchestrator | 2026-03-27 00:56:20.896972 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-27 00:56:20.896976 | orchestrator | Friday 27 March 2026 00:56:04 +0000 (0:00:00.584) 0:02:04.958 ********** 2026-03-27 00:56:20.896980 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:20.896984 | orchestrator | 2026-03-27 00:56:20.896988 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-27 00:56:20.896991 | orchestrator | Friday 27 March 2026 00:56:06 +0000 (0:00:02.597) 0:02:07.556 ********** 2026-03-27 00:56:20.896995 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:20.896999 | orchestrator | 2026-03-27 00:56:20.897003 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-27 00:56:20.897007 | orchestrator | Friday 27 March 2026 00:56:09 +0000 (0:00:02.491) 0:02:10.048 ********** 2026-03-27 00:56:20.897010 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:20.897014 | orchestrator | 2026-03-27 00:56:20.897018 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-27 
00:56:20.897022 | orchestrator | Friday 27 March 2026 00:56:12 +0000 (0:00:02.886) 0:02:12.935 ********** 2026-03-27 00:56:20.897026 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:20.897029 | orchestrator | 2026-03-27 00:56:20.897033 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-27 00:56:20.897037 | orchestrator | Friday 27 March 2026 00:56:14 +0000 (0:00:02.917) 0:02:15.852 ********** 2026-03-27 00:56:20.897041 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:20.897045 | orchestrator | 2026-03-27 00:56:20.897048 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:56:20.897053 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-27 00:56:20.897058 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-27 00:56:20.897066 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-27 00:56:20.897069 | orchestrator | 2026-03-27 00:56:20.897073 | orchestrator | 2026-03-27 00:56:20.897077 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:56:20.897081 | orchestrator | Friday 27 March 2026 00:56:18 +0000 (0:00:03.192) 0:02:19.044 ********** 2026-03-27 00:56:20.897084 | orchestrator | =============================================================================== 2026-03-27 00:56:20.897088 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 56.72s 2026-03-27 00:56:20.897092 | orchestrator | opensearch : Restart opensearch container ------------------------------ 47.98s 2026-03-27 00:56:20.897096 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.19s 2026-03-27 00:56:20.897099 | orchestrator | opensearch : Copying over 
opensearch service config file ---------------- 3.04s 2026-03-27 00:56:20.897103 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.92s 2026-03-27 00:56:20.897107 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.89s 2026-03-27 00:56:20.897113 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.88s 2026-03-27 00:56:20.897117 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.60s 2026-03-27 00:56:20.897181 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.49s 2026-03-27 00:56:20.897186 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.37s 2026-03-27 00:56:20.897189 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.32s 2026-03-27 00:56:20.897193 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.46s 2026-03-27 00:56:20.897200 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.46s 2026-03-27 00:56:20.897204 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.08s 2026-03-27 00:56:20.897208 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.81s 2026-03-27 00:56:20.897212 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2026-03-27 00:56:20.897216 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.65s 2026-03-27 00:56:20.897225 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.61s 2026-03-27 00:56:20.897229 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s 2026-03-27 00:56:20.897232 | orchestrator | opensearch : include_tasks 
---------------------------------------------- 0.49s 2026-03-27 00:56:20.897238 | orchestrator | 2026-03-27 00:56:20 | INFO  | Task 5dbdfa0e-d27e-4e45-abaf-a0c0b66e2b13 is in state STARTED 2026-03-27 00:56:20.899283 | orchestrator | 2026-03-27 00:56:20 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state STARTED 2026-03-27 00:56:20.899651 | orchestrator | 2026-03-27 00:56:20 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:56:54.416248 | orchestrator | 2026-03-27 00:56:54 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:56:54.420364 | orchestrator | 2026-03-27 00:56:54 | INFO  | Task 5dbdfa0e-d27e-4e45-abaf-a0c0b66e2b13 is in state STARTED 2026-03-27 00:56:54.423381 | orchestrator | 2026-03-27 00:56:54 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:56:54.426492 | orchestrator | 2026-03-27 00:56:54 | INFO  | Task 04156ecf-af2c-4e57-99ac-a7459320d926 is in state SUCCESS 2026-03-27 00:56:54.427432 | orchestrator | 2026-03-27 00:56:54 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:56:54.428558 | orchestrator | 2026-03-27 00:56:54.428593 | orchestrator | 2026-03-27 00:56:54.428599 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-27 00:56:54.428604 | orchestrator | 2026-03-27 00:56:54.428610 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-27 00:56:54.428616 | orchestrator | Friday 27 March 2026 00:53:59 +0000 (0:00:00.101) 0:00:00.101 ********** 2026-03-27 00:56:54.428621 | orchestrator | ok: [localhost] => { 2026-03-27 00:56:54.428627 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-27
2026-03-27 00:56:54.428677 | orchestrator | } 2026-03-27 00:56:54.428686 | orchestrator | 2026-03-27 00:56:54.428693 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-27 00:56:54.428700 | orchestrator | Friday 27 March 2026 00:53:59 +0000 (0:00:00.055) 0:00:00.156 ********** 2026-03-27 00:56:54.428798 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-27 00:56:54.428811 | orchestrator | ...ignoring 2026-03-27 00:56:54.428818 | orchestrator | 2026-03-27 00:56:54.428825 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-27 00:56:54.428832 | orchestrator | Friday 27 March 2026 00:54:02 +0000 (0:00:02.896) 0:00:03.053 ********** 2026-03-27 00:56:54.428838 | orchestrator | skipping: [localhost] 2026-03-27 00:56:54.428844 | orchestrator | 2026-03-27 00:56:54.428848 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-27 00:56:54.428852 | orchestrator | Friday 27 March 2026 00:54:02 +0000 (0:00:00.053) 0:00:03.106 ********** 2026-03-27 00:56:54.428856 | orchestrator | ok: [localhost] 2026-03-27 00:56:54.428860 | orchestrator | 2026-03-27 00:56:54.428863 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 00:56:54.428893 | orchestrator | 2026-03-27 00:56:54.428897 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 00:56:54.428901 | orchestrator | Friday 27 March 2026 00:54:02 +0000 (0:00:00.237) 0:00:03.344 ********** 2026-03-27 00:56:54.428905 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:56:54.428909 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:56:54.428913 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:56:54.428917 | orchestrator | 2026-03-27 00:56:54.428920 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 00:56:54.428924 | orchestrator | Friday 27 March 2026 00:54:02 +0000 (0:00:00.312) 0:00:03.656 ********** 2026-03-27 00:56:54.428928 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-27 00:56:54.428932 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-27 00:56:54.428936 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-27 00:56:54.428939 | orchestrator | 2026-03-27 00:56:54.428943 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-27 00:56:54.428947 | orchestrator | 2026-03-27 00:56:54.428951 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-27 00:56:54.428955 | orchestrator | Friday 27 March 2026 00:54:03 +0000 (0:00:00.408) 0:00:04.064 ********** 2026-03-27 00:56:54.428958 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-27 00:56:54.428962 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-27 00:56:54.428966 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-27 00:56:54.428970 | orchestrator | 2026-03-27 00:56:54.428973 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-27 00:56:54.428977 | orchestrator | Friday 27 March 2026 00:54:03 +0000 (0:00:00.371) 0:00:04.436 ********** 2026-03-27 00:56:54.428981 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:54.428985 | orchestrator | 2026-03-27 00:56:54.428989 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-27 00:56:54.428993 | orchestrator | Friday 27 March 2026 00:54:04 +0000 (0:00:00.583) 0:00:05.020 ********** 2026-03-27 00:56:54.429014 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-27 00:56:54.429026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-27 00:56:54.429033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-27 00:56:54.429037 | orchestrator | 2026-03-27 00:56:54.429046 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-27 00:56:54.429050 | orchestrator | Friday 27 March 2026 00:54:06 +0000 (0:00:02.773) 0:00:07.793 ********** 2026-03-27 00:56:54.429054 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:54.429058 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:54.429065 | 
orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:54.429069 | orchestrator | 2026-03-27 00:56:54.429072 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-27 00:56:54.429076 | orchestrator | Friday 27 March 2026 00:54:07 +0000 (0:00:00.685) 0:00:08.479 ********** 2026-03-27 00:56:54.429080 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:54.429083 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:54.429087 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:54.429092 | orchestrator | 2026-03-27 00:56:54.429098 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-27 00:56:54.429108 | orchestrator | Friday 27 March 2026 00:54:08 +0000 (0:00:01.340) 0:00:09.819 ********** 2026-03-27 00:56:54.429116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-27 00:56:54.429131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-27 00:56:54.429143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-27 00:56:54.429150 | orchestrator | 2026-03-27 00:56:54.429156 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-27 00:56:54.429162 | orchestrator | Friday 27 March 2026 00:54:13 +0000 (0:00:04.508) 0:00:14.328 ********** 2026-03-27 00:56:54.429169 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:54.429177 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:54.429184 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:54.429191 | orchestrator | 2026-03-27 00:56:54.429198 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-27 00:56:54.429204 | orchestrator | Friday 27 March 2026 00:54:14 +0000 (0:00:01.156) 0:00:15.485 ********** 2026-03-27 00:56:54.429207 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:56:54.429211 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:56:54.429215 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:56:54.429218 | orchestrator | 2026-03-27 00:56:54.429222 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-27 00:56:54.429226 | orchestrator | Friday 27 March 2026 00:54:19 +0000 (0:00:04.665) 0:00:20.151 ********** 2026-03-27 00:56:54.429230 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:56:54.429234 | orchestrator | 2026-03-27 00:56:54.429240 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-27 
00:56:54.429244 | orchestrator | Friday 27 March 2026 00:54:19 +0000 (0:00:00.565) 0:00:20.716 ********** 2026-03-27 00:56:54.429252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:56:54.429259 | orchestrator | 
skipping: [testbed-node-1] 2026-03-27 00:56:54.429264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:56:54.429268 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:54.429276 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:56:54.429283 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:54.429287 | orchestrator | 2026-03-27 00:56:54.429291 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2026-03-27 00:56:54.429295 | orchestrator | Friday 27 March 2026 00:54:23 +0000 (0:00:03.280) 0:00:23.997 ********** 2026-03-27 00:56:54.429299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2026-03-27 00:56:54.429303 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:54.429311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:56:54.429322 | orchestrator | skipping: 
[testbed-node-0] 2026-03-27 00:56:54.429326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:56:54.429330 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:56:54.429334 | orchestrator | 2026-03-27 
00:56:54.429337 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-27 00:56:54.429341 | orchestrator | Friday 27 March 2026 00:54:25 +0000 (0:00:02.828) 0:00:26.826 ********** 2026-03-27 00:56:54.429347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:56:54.429355 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:56:54.429362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-03-27 00:56:54.429366 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:56:54.429372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-27 00:56:54.429379 | orchestrator | skipping: 
[testbed-node-1] 2026-03-27 00:56:54.429383 | orchestrator | 2026-03-27 00:56:54.429386 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-27 00:56:54.429390 | orchestrator | Friday 27 March 2026 00:54:29 +0000 (0:00:03.284) 0:00:30.111 ********** 2026-03-27 00:56:54.429397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-27 00:56:54.429404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-03-27 00:56:54.429414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-27 00:56:54.429419 | orchestrator | 2026-03-27 00:56:54.429422 | orchestrator | TASK [mariadb : Create MariaDB volume] 
*****************************************
2026-03-27 00:56:54.429426 | orchestrator | Friday 27 March 2026 00:54:32 +0000 (0:00:03.720) 0:00:33.831 **********
2026-03-27 00:56:54.429430 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:54.429434 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:56:54.429438 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:56:54.429442 | orchestrator |
2026-03-27 00:56:54.429446 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-27 00:56:54.429451 | orchestrator | Friday 27 March 2026 00:54:33 +0000 (0:00:00.813) 0:00:34.645 **********
2026-03-27 00:56:54.429455 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:54.429459 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:54.429463 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:54.429468 | orchestrator |
2026-03-27 00:56:54.429472 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-27 00:56:54.429478 | orchestrator | Friday 27 March 2026 00:54:33 +0000 (0:00:00.274) 0:00:34.919 **********
2026-03-27 00:56:54.429487 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:54.429495 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:54.429500 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:54.429507 | orchestrator |
2026-03-27 00:56:54.429513 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-27 00:56:54.429524 | orchestrator | Friday 27 March 2026 00:54:34 +0000 (0:00:00.295) 0:00:35.214 **********
2026-03-27 00:56:54.429531 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-27 00:56:54.429537 | orchestrator | ...ignoring
2026-03-27 00:56:54.429545 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-27 00:56:54.429551 | orchestrator | ...ignoring
2026-03-27 00:56:54.429557 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-27 00:56:54.429563 | orchestrator | ...ignoring
2026-03-27 00:56:54.429570 | orchestrator |
2026-03-27 00:56:54.429577 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-27 00:56:54.429583 | orchestrator | Friday 27 March 2026 00:54:45 +0000 (0:00:11.025) 0:00:46.240 **********
2026-03-27 00:56:54.429590 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:54.429597 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:54.429604 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:54.429609 | orchestrator |
2026-03-27 00:56:54.429619 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-27 00:56:54.429625 | orchestrator | Friday 27 March 2026 00:54:45 +0000 (0:00:00.423) 0:00:46.664 **********
2026-03-27 00:56:54.429631 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:54.429639 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.429645 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.429651 | orchestrator |
2026-03-27 00:56:54.429657 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-27 00:56:54.429661 | orchestrator | Friday 27 March 2026 00:54:46 +0000 (0:00:00.423) 0:00:47.088 **********
2026-03-27 00:56:54.429664 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:54.429668 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.429672 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.429676 | orchestrator |
2026-03-27 00:56:54.429679 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-27 00:56:54.429683 | orchestrator | Friday 27 March 2026 00:54:46 +0000 (0:00:00.445) 0:00:47.533 **********
2026-03-27 00:56:54.429687 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:54.429691 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.429694 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.429698 | orchestrator |
2026-03-27 00:56:54.429702 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-27 00:56:54.429706 | orchestrator | Friday 27 March 2026 00:54:47 +0000 (0:00:00.883) 0:00:48.417 **********
2026-03-27 00:56:54.429709 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:54.429713 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:54.429717 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:54.429720 | orchestrator |
2026-03-27 00:56:54.429724 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-27 00:56:54.429728 | orchestrator | Friday 27 March 2026 00:54:47 +0000 (0:00:00.451) 0:00:48.868 **********
2026-03-27 00:56:54.429735 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:54.429739 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.429743 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.429747 | orchestrator |
2026-03-27 00:56:54.429750 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-27 00:56:54.429754 | orchestrator | Friday 27 March 2026 00:54:48 +0000 (0:00:00.421) 0:00:49.290 **********
2026-03-27 00:56:54.429758 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.429762 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.429765 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-27 00:56:54.429769 | orchestrator |
2026-03-27 00:56:54.429773 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-27 00:56:54.429780 | orchestrator | Friday 27 March 2026 00:54:48 +0000 (0:00:00.359) 0:00:49.650 **********
2026-03-27 00:56:54.429784 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:54.429788 | orchestrator |
2026-03-27 00:56:54.429792 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-27 00:56:54.429798 | orchestrator | Friday 27 March 2026 00:54:58 +0000 (0:00:10.050) 0:00:59.700 **********
2026-03-27 00:56:54.429804 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:54.429810 | orchestrator |
2026-03-27 00:56:54.429816 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-27 00:56:54.429822 | orchestrator | Friday 27 March 2026 00:54:59 +0000 (0:00:00.286) 0:00:59.986 **********
2026-03-27 00:56:54.429828 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:54.429835 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.429842 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.429846 | orchestrator |
2026-03-27 00:56:54.429850 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-27 00:56:54.429854 | orchestrator | Friday 27 March 2026 00:54:59 +0000 (0:00:00.813) 0:01:00.800 **********
2026-03-27 00:56:54.429858 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:54.429861 | orchestrator |
2026-03-27 00:56:54.429910 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-27 00:56:54.429916 | orchestrator | Friday 27 March 2026 00:55:06 +0000 (0:00:06.972) 0:01:07.772 **********
2026-03-27 00:56:54.429919 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:54.429923 | orchestrator |
2026-03-27 00:56:54.429927 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-27 00:56:54.429931 | orchestrator | Friday 27 March 2026 00:55:08 +0000 (0:00:01.502) 0:01:09.275 **********
2026-03-27 00:56:54.429934 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:54.429938 | orchestrator |
2026-03-27 00:56:54.429942 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-27 00:56:54.429945 | orchestrator | Friday 27 March 2026 00:55:10 +0000 (0:00:01.956) 0:01:11.232 **********
2026-03-27 00:56:54.429949 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:54.429953 | orchestrator |
2026-03-27 00:56:54.429957 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-27 00:56:54.429961 | orchestrator | Friday 27 March 2026 00:55:10 +0000 (0:00:00.236) 0:01:11.468 **********
2026-03-27 00:56:54.429964 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:54.429968 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.429972 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.429976 | orchestrator |
2026-03-27 00:56:54.429979 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-27 00:56:54.429983 | orchestrator | Friday 27 March 2026 00:55:10 +0000 (0:00:00.281) 0:01:11.750 **********
2026-03-27 00:56:54.429987 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:54.429991 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:56:54.429994 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:56:54.429998 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-27 00:56:54.430002 | orchestrator |
2026-03-27 00:56:54.430006 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-27 00:56:54.430009 | orchestrator | skipping: no hosts matched
2026-03-27 00:56:54.430035 | orchestrator |
2026-03-27 00:56:54.430039 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-27 00:56:54.430043 | orchestrator |
2026-03-27 00:56:54.430047 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-27 00:56:54.430054 | orchestrator | Friday 27 March 2026 00:55:11 +0000 (0:00:00.279) 0:01:12.030 **********
2026-03-27 00:56:54.430058 | orchestrator | changed: [testbed-node-1]
2026-03-27 00:56:54.430061 | orchestrator |
2026-03-27 00:56:54.430065 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-27 00:56:54.430073 | orchestrator | Friday 27 March 2026 00:55:26 +0000 (0:00:15.410) 0:01:27.440 **********
2026-03-27 00:56:54.430077 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:54.430080 | orchestrator |
2026-03-27 00:56:54.430084 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-27 00:56:54.430088 | orchestrator | Friday 27 March 2026 00:55:40 +0000 (0:00:14.467) 0:01:41.908 **********
2026-03-27 00:56:54.430092 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:54.430095 | orchestrator |
2026-03-27 00:56:54.430101 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-27 00:56:54.430108 | orchestrator |
2026-03-27 00:56:54.430114 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-27 00:56:54.430120 | orchestrator | Friday 27 March 2026 00:55:43 +0000 (0:00:02.386) 0:01:44.294 **********
2026-03-27 00:56:54.430126 | orchestrator | changed: [testbed-node-2]
2026-03-27 00:56:54.430132 | orchestrator |
2026-03-27 00:56:54.430138 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-27 00:56:54.430144 | orchestrator | Friday 27 March 2026 00:55:59 +0000 (0:00:15.939) 0:02:00.234 **********
2026-03-27 00:56:54.430149 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:54.430157 | orchestrator |
2026-03-27 00:56:54.430163 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-27 00:56:54.430171 | orchestrator | Friday 27 March 2026 00:56:16 +0000 (0:00:16.935) 0:02:17.169 **********
2026-03-27 00:56:54.430177 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:54.430184 | orchestrator |
2026-03-27 00:56:54.430190 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-27 00:56:54.430196 | orchestrator |
2026-03-27 00:56:54.430208 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-27 00:56:54.430215 | orchestrator | Friday 27 March 2026 00:56:18 +0000 (0:00:02.405) 0:02:19.575 **********
2026-03-27 00:56:54.430221 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:54.430227 | orchestrator |
2026-03-27 00:56:54.430231 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-27 00:56:54.430235 | orchestrator | Friday 27 March 2026 00:56:30 +0000 (0:00:12.279) 0:02:31.854 **********
2026-03-27 00:56:54.430239 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:54.430243 | orchestrator |
2026-03-27 00:56:54.430247 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-27 00:56:54.430250 | orchestrator | Friday 27 March 2026 00:56:35 +0000 (0:00:04.575) 0:02:36.430 **********
2026-03-27 00:56:54.430254 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:54.430258 | orchestrator |
2026-03-27 00:56:54.430262 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-27 00:56:54.430265 | orchestrator |
2026-03-27 00:56:54.430269 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-27 00:56:54.430273 | orchestrator | Friday 27 March 2026 00:56:37 +0000 (0:00:02.414) 0:02:38.844 **********
2026-03-27 00:56:54.430277 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:56:54.430281 | orchestrator |
2026-03-27 00:56:54.430284 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-27 00:56:54.430288 | orchestrator | Friday 27 March 2026 00:56:38 +0000 (0:00:00.686) 0:02:39.531 **********
2026-03-27 00:56:54.430292 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.430296 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.430300 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:54.430303 | orchestrator |
2026-03-27 00:56:54.430307 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-27 00:56:54.430311 | orchestrator | Friday 27 March 2026 00:56:41 +0000 (0:00:02.702) 0:02:42.233 **********
2026-03-27 00:56:54.430314 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.430318 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.430322 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:54.430326 | orchestrator |
2026-03-27 00:56:54.430329 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-27 00:56:54.430337 | orchestrator | Friday 27 March 2026 00:56:43 +0000 (0:00:02.419) 0:02:44.652 **********
2026-03-27 00:56:54.430340 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.430344 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.430348 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:54.430351 | orchestrator |
2026-03-27 00:56:54.430355 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-27 00:56:54.430359 | orchestrator | Friday 27 March 2026 00:56:46 +0000 (0:00:02.460) 0:02:47.113 **********
2026-03-27 00:56:54.430363 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.430366 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.430370 | orchestrator | changed: [testbed-node-0]
2026-03-27 00:56:54.430374 | orchestrator |
2026-03-27 00:56:54.430378 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-27 00:56:54.430382 | orchestrator | Friday 27 March 2026 00:56:48 +0000 (0:00:02.544) 0:02:49.657 **********
2026-03-27 00:56:54.430385 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:56:54.430389 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:56:54.430393 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:56:54.430397 | orchestrator |
2026-03-27 00:56:54.430400 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-27 00:56:54.430404 | orchestrator | Friday 27 March 2026 00:56:51 +0000 (0:00:02.564) 0:02:52.222 **********
2026-03-27 00:56:54.430408 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:56:54.430412 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:56:54.430415 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:56:54.430419 | orchestrator |
2026-03-27 00:56:54.430423 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:56:54.430427 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-27 00:56:54.430435 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-03-27 00:56:54.430443 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-27 00:56:54.430447 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-27 00:56:54.430450 | orchestrator |
2026-03-27 00:56:54.430454 | orchestrator |
2026-03-27 00:56:54.430458 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:56:54.430462 | orchestrator | Friday 27 March 2026 00:56:51 +0000 (0:00:00.194) 0:02:52.417 **********
2026-03-27 00:56:54.430465 | orchestrator | ===============================================================================
2026-03-27 00:56:54.430469 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.40s
2026-03-27 00:56:54.430473 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 31.35s
2026-03-27 00:56:54.430476 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.28s
2026-03-27 00:56:54.430480 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.03s
2026-03-27 00:56:54.430484 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.05s
2026-03-27 00:56:54.430488 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.97s
2026-03-27 00:56:54.430494 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.79s
2026-03-27 00:56:54.430498 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.67s
2026-03-27 00:56:54.430501 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.58s
2026-03-27 00:56:54.430505 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.51s
2026-03-27 00:56:54.430512 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.72s
2026-03-27 00:56:54.430516 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.28s
2026-03-27 00:56:54.430520 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.28s
2026-03-27 00:56:54.430523 | orchestrator | Check MariaDB service
--------------------------------------------------- 2.90s 2026-03-27 00:56:54.430527 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.83s 2026-03-27 00:56:54.430531 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.77s 2026-03-27 00:56:54.430535 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.70s 2026-03-27 00:56:54.430538 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.56s 2026-03-27 00:56:54.430542 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.54s 2026-03-27 00:56:54.430546 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.46s 2026-03-27 00:56:57.459953 | orchestrator | 2026-03-27 00:56:57 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:56:57.460825 | orchestrator | 2026-03-27 00:56:57 | INFO  | Task 5dbdfa0e-d27e-4e45-abaf-a0c0b66e2b13 is in state STARTED 2026-03-27 00:56:57.461860 | orchestrator | 2026-03-27 00:56:57 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:56:57.461927 | orchestrator | 2026-03-27 00:56:57 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:57:52.278910 | orchestrator | 2026-03-27 00:57:52 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:57:52.284727 | orchestrator | 2026-03-27 00:57:52 | INFO  | Task 5dbdfa0e-d27e-4e45-abaf-a0c0b66e2b13 is in state SUCCESS 2026-03-27 00:57:52.286880 | orchestrator | 2026-03-27 00:57:52.286940 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-27 00:57:52.286946 | orchestrator | 2.16.14 2026-03-27 00:57:52.286950 | orchestrator | 2026-03-27 00:57:52.286954 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-27 00:57:52.286958 | orchestrator | 2026-03-27 00:57:52.286985 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-27 00:57:52.286990 | orchestrator | Friday 27 March 2026 00:56:05 +0000 (0:00:00.563) 0:00:00.563 ********** 2026-03-27 00:57:52.287005 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:57:52.287009 | orchestrator | 2026-03-27 00:57:52.287012 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-27 00:57:52.287015 | orchestrator | Friday 27 March 2026 00:56:05 +0000 (0:00:00.633) 0:00:01.197 ********** 2026-03-27 00:57:52.287018 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.287022 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.287025 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.287028 | orchestrator | 2026-03-27 00:57:52.287031 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-27 00:57:52.287034 | orchestrator | Friday 27 March 2026 00:56:06 +0000
(0:00:01.003) 0:00:02.201 ********** 2026-03-27 00:57:52.287037 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.287040 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.287043 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.287046 | orchestrator | 2026-03-27 00:57:52.287052 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-27 00:57:52.287057 | orchestrator | Friday 27 March 2026 00:56:07 +0000 (0:00:00.277) 0:00:02.478 ********** 2026-03-27 00:57:52.287062 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.287067 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.287072 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.287077 | orchestrator | 2026-03-27 00:57:52.287082 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-27 00:57:52.287087 | orchestrator | Friday 27 March 2026 00:56:07 +0000 (0:00:00.855) 0:00:03.333 ********** 2026-03-27 00:57:52.287093 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.287098 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.287102 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.287106 | orchestrator | 2026-03-27 00:57:52.287115 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-27 00:57:52.287118 | orchestrator | Friday 27 March 2026 00:56:08 +0000 (0:00:00.325) 0:00:03.659 ********** 2026-03-27 00:57:52.287121 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.287124 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.287127 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.287130 | orchestrator | 2026-03-27 00:57:52.287133 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-27 00:57:52.287136 | orchestrator | Friday 27 March 2026 00:56:08 +0000 (0:00:00.296) 0:00:03.956 ********** 2026-03-27 00:57:52.287139 | 
orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.287142 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.287145 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.287148 | orchestrator | 2026-03-27 00:57:52.287151 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-27 00:57:52.287154 | orchestrator | Friday 27 March 2026 00:56:08 +0000 (0:00:00.295) 0:00:04.251 ********** 2026-03-27 00:57:52.287158 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287161 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.287164 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.287167 | orchestrator | 2026-03-27 00:57:52.287170 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-27 00:57:52.287199 | orchestrator | Friday 27 March 2026 00:56:09 +0000 (0:00:00.536) 0:00:04.788 ********** 2026-03-27 00:57:52.287203 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.287206 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.287209 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.287212 | orchestrator | 2026-03-27 00:57:52.287215 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-27 00:57:52.287218 | orchestrator | Friday 27 March 2026 00:56:09 +0000 (0:00:00.311) 0:00:05.099 ********** 2026-03-27 00:57:52.287222 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-27 00:57:52.287225 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-27 00:57:52.287257 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-27 00:57:52.287261 | orchestrator | 2026-03-27 00:57:52.287265 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-27 
00:57:52.287268 | orchestrator | Friday 27 March 2026 00:56:10 +0000 (0:00:00.640) 0:00:05.740 ********** 2026-03-27 00:57:52.287271 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.287274 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.287277 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.287280 | orchestrator | 2026-03-27 00:57:52.287283 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-27 00:57:52.287287 | orchestrator | Friday 27 March 2026 00:56:10 +0000 (0:00:00.466) 0:00:06.206 ********** 2026-03-27 00:57:52.287290 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-27 00:57:52.287293 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-27 00:57:52.287296 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-27 00:57:52.287299 | orchestrator | 2026-03-27 00:57:52.287302 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-27 00:57:52.287306 | orchestrator | Friday 27 March 2026 00:56:13 +0000 (0:00:03.057) 0:00:09.264 ********** 2026-03-27 00:57:52.287309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-27 00:57:52.287312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-27 00:57:52.287315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-27 00:57:52.287318 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287572 | orchestrator | 2026-03-27 00:57:52.287590 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-27 00:57:52.287594 | orchestrator | Friday 27 March 2026 00:56:14 +0000 (0:00:00.424) 0:00:09.688 ********** 2026-03-27 00:57:52.287598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-27 00:57:52.287602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-27 00:57:52.287606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-27 00:57:52.287609 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287612 | orchestrator | 2026-03-27 00:57:52.287615 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-27 00:57:52.287618 | orchestrator | Friday 27 March 2026 00:56:15 +0000 (0:00:00.825) 0:00:10.513 ********** 2026-03-27 00:57:52.287622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-27 00:57:52.287630 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-27 
00:57:52.287638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-27 00:57:52.287641 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287644 | orchestrator | 2026-03-27 00:57:52.287647 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-27 00:57:52.287650 | orchestrator | Friday 27 March 2026 00:56:15 +0000 (0:00:00.155) 0:00:10.669 ********** 2026-03-27 00:57:52.287654 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7e634cdc2c9d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-27 00:56:11.815523', 'end': '2026-03-27 00:56:11.854327', 'delta': '0:00:00.038804', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7e634cdc2c9d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-27 00:57:52.287658 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fb710f894dfc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-27 00:56:12.930776', 'end': '2026-03-27 00:56:12.955263', 'delta': '0:00:00.024487', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fb710f894dfc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-27 00:57:52.287670 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '07f01a60cd74', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-27 00:56:13.715524', 'end': '2026-03-27 00:56:13.750053', 'delta': '0:00:00.034529', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['07f01a60cd74'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-27 00:57:52.287674 | orchestrator | 2026-03-27 00:57:52.287677 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-27 00:57:52.287680 | orchestrator | Friday 27 March 2026 00:56:15 +0000 (0:00:00.406) 0:00:11.076 ********** 2026-03-27 00:57:52.287683 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.287686 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.287689 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.287692 | orchestrator | 2026-03-27 00:57:52.287695 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-27 00:57:52.287698 | orchestrator | Friday 27 March 2026 00:56:16 +0000 (0:00:00.462) 0:00:11.538 ********** 2026-03-27 00:57:52.287701 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-27 00:57:52.287704 | orchestrator | 2026-03-27 00:57:52.287707 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-27 00:57:52.287711 | orchestrator | Friday 27 March 2026 00:56:17 +0000 (0:00:01.486) 0:00:13.024 ********** 2026-03-27 00:57:52.287716 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287719 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.287722 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.287725 | orchestrator | 2026-03-27 00:57:52.287729 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-27 00:57:52.287732 | orchestrator | Friday 27 March 2026 00:56:17 +0000 (0:00:00.313) 0:00:13.338 ********** 2026-03-27 00:57:52.287736 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287762 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.287768 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.287772 | orchestrator | 2026-03-27 00:57:52.287777 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-27 00:57:52.287782 | orchestrator | Friday 27 March 2026 00:56:18 +0000 (0:00:00.420) 0:00:13.759 ********** 2026-03-27 00:57:52.287787 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287792 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.287796 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.287801 | orchestrator | 2026-03-27 00:57:52.287806 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-27 00:57:52.287811 | orchestrator | Friday 27 March 2026 00:56:18 +0000 (0:00:00.534) 0:00:14.293 ********** 2026-03-27 00:57:52.287815 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.287820 | orchestrator | 2026-03-27 00:57:52.287825 | orchestrator | TASK 
[ceph-facts : Generate cluster fsid] ************************************** 2026-03-27 00:57:52.287830 | orchestrator | Friday 27 March 2026 00:56:19 +0000 (0:00:00.115) 0:00:14.409 ********** 2026-03-27 00:57:52.287835 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287841 | orchestrator | 2026-03-27 00:57:52.287894 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-27 00:57:52.287901 | orchestrator | Friday 27 March 2026 00:56:19 +0000 (0:00:00.223) 0:00:14.632 ********** 2026-03-27 00:57:52.287906 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287911 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.287916 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.287921 | orchestrator | 2026-03-27 00:57:52.287926 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-27 00:57:52.287931 | orchestrator | Friday 27 March 2026 00:56:19 +0000 (0:00:00.310) 0:00:14.943 ********** 2026-03-27 00:57:52.287936 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287941 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.287947 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.287952 | orchestrator | 2026-03-27 00:57:52.287957 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-27 00:57:52.287963 | orchestrator | Friday 27 March 2026 00:56:19 +0000 (0:00:00.356) 0:00:15.299 ********** 2026-03-27 00:57:52.287968 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.287973 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.287978 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.287984 | orchestrator | 2026-03-27 00:57:52.287989 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-27 00:57:52.287994 | orchestrator | Friday 27 March 2026 
00:56:20 +0000 (0:00:00.570) 0:00:15.870 ********** 2026-03-27 00:57:52.287999 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.288004 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.288009 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.288015 | orchestrator | 2026-03-27 00:57:52.288018 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-27 00:57:52.288021 | orchestrator | Friday 27 March 2026 00:56:20 +0000 (0:00:00.338) 0:00:16.208 ********** 2026-03-27 00:57:52.288024 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.288027 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.288030 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.288033 | orchestrator | 2026-03-27 00:57:52.288036 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-27 00:57:52.288044 | orchestrator | Friday 27 March 2026 00:56:21 +0000 (0:00:00.328) 0:00:16.536 ********** 2026-03-27 00:57:52.288047 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.288050 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.288053 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.288077 | orchestrator | 2026-03-27 00:57:52.288081 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-27 00:57:52.288085 | orchestrator | Friday 27 March 2026 00:56:21 +0000 (0:00:00.310) 0:00:16.846 ********** 2026-03-27 00:57:52.288088 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.288091 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.288094 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.288097 | orchestrator | 2026-03-27 00:57:52.288100 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-27 00:57:52.288103 | orchestrator | Friday 27 March 2026 
00:56:22 +0000 (0:00:00.581) 0:00:17.428 ********** 2026-03-27 00:57:52.288107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--49c52ee7--6668--5cd2--bd86--f7267953750e-osd--block--49c52ee7--6668--5cd2--bd86--f7267953750e', 'dm-uuid-LVM-aIeYERUPfSMgKvMUlrUvkdFoiC095wYqmQHJrrTn0jpmHxteM5p3holeBEU1wK52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2cf1a901--b2f7--5490--8423--90f944953f5f-osd--block--2cf1a901--b2f7--5490--8423--90f944953f5f', 'dm-uuid-LVM-oG5nRXfwiEfIyT67me8tDDkp9qe9PZl6uJWjGDHETnsMXx2yJFE6R8tp2wcCvLG6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part1', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part14', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part15', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part16', 'scsi-SQEMU_QEMU_HARDDISK_4b291496-18ea-45da-96d1-ca760a1ff526-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288166 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--49c52ee7--6668--5cd2--bd86--f7267953750e-osd--block--49c52ee7--6668--5cd2--bd86--f7267953750e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c27DoD-5Xms-HWce-cCFK-RGwJ-OB5L-Wp0aUE', 'scsi-0QEMU_QEMU_HARDDISK_62ab2900-9bbe-4288-89a4-62dba7ae92ab', 'scsi-SQEMU_QEMU_HARDDISK_62ab2900-9bbe-4288-89a4-62dba7ae92ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f-osd--block--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f', 'dm-uuid-LVM-kXpxmk7mM7gsT0IEG34nSngbkTZpbdXRxkZWqd06KQroWKMJAdY7IUK7KXlT0a4X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2cf1a901--b2f7--5490--8423--90f944953f5f-osd--block--2cf1a901--b2f7--5490--8423--90f944953f5f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AdpT4M-V1ru-ryF1-yUmX-ps46-3mDd-YCPCY0', 'scsi-0QEMU_QEMU_HARDDISK_0ff86b74-b83b-4d7e-b564-01c0b90f308d', 'scsi-SQEMU_QEMU_HARDDISK_0ff86b74-b83b-4d7e-b564-01c0b90f308d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--627e7bc4--4e7d--5af1--903b--8d115676372d-osd--block--627e7bc4--4e7d--5af1--903b--8d115676372d', 'dm-uuid-LVM-tAGTKeLAL1CuimTCxNRF6S7vcoFbSB1IG207gSsYVP7XHnbeEilqW2dICrCpUzDt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_52ce1f02-342d-40b1-ab4b-d26aefe85f26', 'scsi-SQEMU_QEMU_HARDDISK_52ce1f02-342d-40b1-ab4b-d26aefe85f26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-27 00:57:52.288230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288236 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.288242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part1', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part14', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part15', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part16', 'scsi-SQEMU_QEMU_HARDDISK_6016cd75-c7c0-403c-b545-4970d85db376-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bb6fbf97--7198--5485--83ee--7be3b389ad62-osd--block--bb6fbf97--7198--5485--83ee--7be3b389ad62', 'dm-uuid-LVM-CjqIlvHeAtR3JbQk0BgFBJxu6DMSkyeQ6Z2BmWlBw0epF9HWYfyR2g1Gee0Y0aRK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f-osd--block--b8da8e02--1f61--55dd--bf76--a4ff2d17c49f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7mpUVm-WSwP-nQK5-a7bw-t1xe-hN5n-Diz1dd', 'scsi-0QEMU_QEMU_HARDDISK_86c6402f-d184-4443-979d-ecd201841231', 'scsi-SQEMU_QEMU_HARDDISK_86c6402f-d184-4443-979d-ecd201841231'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331-osd--block--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331', 'dm-uuid-LVM-J1Nq2ec7Gmy9QQADR5bdDjMg13S83C0ff6IWfn1j1PGxmlgMcc6TFvgvCYtuSrhX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--627e7bc4--4e7d--5af1--903b--8d115676372d-osd--block--627e7bc4--4e7d--5af1--903b--8d115676372d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yJHRP5-hv0d-FXuF-M4Vj-N3MC-oEik-gGt0x7', 'scsi-0QEMU_QEMU_HARDDISK_131bb9e5-0133-49dd-b67b-125236a47022', 'scsi-SQEMU_QEMU_HARDDISK_131bb9e5-0133-49dd-b67b-125236a47022'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2796c507-44e5-4ccf-b3e2-014e00eaf9ef', 'scsi-SQEMU_QEMU_HARDDISK_2796c507-44e5-4ccf-b3e2-014e00eaf9ef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288319 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.288322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-27 00:57:52.288356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part1', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part14', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part15', 'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part16', 
'scsi-SQEMU_QEMU_HARDDISK_967da385-7d5e-4e32-b850-70936458610b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bb6fbf97--7198--5485--83ee--7be3b389ad62-osd--block--bb6fbf97--7198--5485--83ee--7be3b389ad62'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0CT0cf-3Djh-G5bQ-hgkl-4qDa-J3jY-vD1h3S', 'scsi-0QEMU_QEMU_HARDDISK_3878b4cc-7fe4-4758-b0af-fcf7391d431c', 'scsi-SQEMU_QEMU_HARDDISK_3878b4cc-7fe4-4758-b0af-fcf7391d431c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331-osd--block--f9aa8e5e--9a1f--5185--aaa5--5b53eb599331'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iTweHW-LY1Y-g2UM-sheT-2IyK-2y3c-bkBoq2', 'scsi-0QEMU_QEMU_HARDDISK_53da1fd0-572d-430c-b2ac-506bde32f617', 'scsi-SQEMU_QEMU_HARDDISK_53da1fd0-572d-430c-b2ac-506bde32f617'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3917e6ab-68a3-44be-970a-31d9d2a57984', 'scsi-SQEMU_QEMU_HARDDISK_3917e6ab-68a3-44be-970a-31d9d2a57984'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-27-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-27 00:57:52.288391 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.288397 | orchestrator | 2026-03-27 00:57:52.288400 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] ***
2026-03-27 00:57:52.288403 | orchestrator | Friday 27 March 2026 00:56:22 +0000 (0:00:00.571) 0:00:17.999 **********
2026-03-27 00:57:52.288407 | orchestrator | skipping: [testbed-node-3] => (item=dm-0, skip_reason='Conditional result was False', false_condition='osd_auto_discovery | default(False) | bool')
2026-03-27 00:57:52.288411 | orchestrator | skipping: [testbed-node-3] => (item=dm-1)
2026-03-27 00:57:52.288418 | orchestrator | skipping: [testbed-node-3] => (item=loop0)
2026-03-27 00:57:52.288461 | orchestrator | skipping: [testbed-node-3] => (item=loop1)
2026-03-27 00:57:52.288473 | orchestrator | skipping: [testbed-node-3] => (item=loop2)
2026-03-27 00:57:52.288481 | orchestrator | skipping: [testbed-node-3] => (item=loop3)
2026-03-27 00:57:52.288485 | orchestrator | skipping: [testbed-node-3] => (item=loop4)
2026-03-27 00:57:52.288490 | orchestrator | skipping: [testbed-node-3] => (item=loop5)
2026-03-27 00:57:52.288494 | orchestrator | skipping: [testbed-node-3] => (item=loop6)
2026-03-27 00:57:52.288502 | orchestrator | skipping: [testbed-node-4] => (item=dm-0)
2026-03-27 00:57:52.288507 | orchestrator | skipping: [testbed-node-3] => (item=loop7)
2026-03-27 00:57:52.288515 | orchestrator | skipping: [testbed-node-4] => (item=dm-1)
2026-03-27 00:57:52.288525 | orchestrator | skipping: [testbed-node-3] => (item=sda)
2026-03-27 00:57:52.288534 | orchestrator | skipping: [testbed-node-4] => (item=loop0)
2026-03-27 00:57:52.288542 | orchestrator | skipping: [testbed-node-3] => (item=sdb)
2026-03-27 00:57:52.288548 | orchestrator | skipping: [testbed-node-3] => (item=sdc)
2026-03-27 00:57:52.288556 | orchestrator | skipping: [testbed-node-4] => (item=loop1)
2026-03-27 00:57:52.288565 | orchestrator | skipping: [testbed-node-3] => (item=sdd)
2026-03-27 00:57:52.288569 | orchestrator | skipping: [testbed-node-4] => (item=loop2)
2026-03-27 00:57:52.288573 | orchestrator | skipping: [testbed-node-3] => (item=sr0)
2026-03-27 00:57:52.288576 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:57:52.288583 | orchestrator | skipping: [testbed-node-4] => (item=loop3)
2026-03-27 00:57:52.288587 | orchestrator | skipping: [testbed-node-4] => (item=loop4)
2026-03-27 00:57:52.288594 | orchestrator | skipping: [testbed-node-4] => (item=loop5)
2026-03-27 00:57:52.288599 | orchestrator | skipping: [testbed-node-5] => (item=dm-0)
2026-03-27 00:57:52.288603 | orchestrator | skipping: [testbed-node-4] => (item=loop6)
2026-03-27 00:57:52.288606 | orchestrator | skipping: [testbed-node-5] => (item=dm-1)
2026-03-27 00:57:52.288613 | orchestrator | skipping: [testbed-node-5] => (item=loop0)
2026-03-27 00:57:52.288617 | orchestrator | skipping: [testbed-node-4] => (item=loop7)
2026-03-27 00:57:52.288620 | orchestrator | skipping: [testbed-node-5] => (item=loop1)
2026-03-27 00:57:52.288629 | orchestrator | skipping: [testbed-node-4] => (item=sda)
2026-03-27 00:57:52.288635 | orchestrator | skipping: [testbed-node-5] => (item=loop2)
2026-03-27 00:57:52.288638 | orchestrator | skipping: [testbed-node-4] => (item=sdb)
2026-03-27 00:57:52.288646 | orchestrator | skipping: [testbed-node-4] => (item=sdc)
2026-03-27 00:57:52.288650 | orchestrator | skipping: [testbed-node-5] => (item=loop3)
2026-03-27 00:57:52.288653 | orchestrator | skipping: [testbed-node-4] => (item=sdd)
2026-03-27 00:57:52.288659 | orchestrator | skipping: [testbed-node-5] => (item=loop4)
2026-03-27 00:57:52.288663 | orchestrator | skipping: [testbed-node-4] => (item=sr0)
2026-03-27 00:57:52.288669 | orchestrator | skipping: [testbed-node-5] => (item=loop5)
2026-03-27 00:57:52.288677 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:57:52.288683 | orchestrator | skipping: [testbed-node-5] => (item=loop6)
2026-03-27 00:57:52.288689 | orchestrator | skipping: [testbed-node-5] => (item=loop7)
2026-03-27 00:57:52.288698 | orchestrator | skipping: [testbed-node-5] => (item=sda)
2026-03-27 00:57:52.288706 | orchestrator | skipping: [testbed-node-5] => (item=sdb)
2026-03-27 00:57:52.288710 | orchestrator | skipping: [testbed-node-5] => (item=sdc)
2026-03-27 00:57:52.288714 | orchestrator | skipping: [testbed-node-5] => (item=sdd)
2026-03-27 00:57:52.288719 | orchestrator | skipping: [testbed-node-5] => (item=sr0)
2026-03-27 00:57:52.288722 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:57:52.288725 | orchestrator |
2026-03-27 00:57:52.288728 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-27 00:57:52.288734 | orchestrator | Friday 27 March 2026 00:56:23 +0000 (0:00:00.612) 0:00:18.612 **********
2026-03-27 00:57:52.288737 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:57:52.288740 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:57:52.288743 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:57:52.288746 | orchestrator |
2026-03-27 00:57:52.288749 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-27 00:57:52.288752 | orchestrator | Friday 27 March 2026 00:56:23 +0000 (0:00:00.667) 0:00:19.279 **********
2026-03-27 00:57:52.288755 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:57:52.288758 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:57:52.288761 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:57:52.288764 | orchestrator |
2026-03-27 00:57:52.288767 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-27 00:57:52.288770 | orchestrator | Friday 27 March 2026 00:56:24 +0000 (0:00:00.517) 0:00:19.796 **********
2026-03-27 00:57:52.288773 | orchestrator | ok: [testbed-node-3]
2026-03-27 00:57:52.288776 | orchestrator | ok: [testbed-node-4]
2026-03-27 00:57:52.288779 | orchestrator | ok: [testbed-node-5]
2026-03-27 00:57:52.288782 | orchestrator |
2026-03-27 00:57:52.288785 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-27 00:57:52.288788 | orchestrator | Friday 27 March 2026 00:56:25 +0000 (0:00:00.683) 0:00:20.479 **********
2026-03-27 00:57:52.288791 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:57:52.288794 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:57:52.288797 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:57:52.288800 | orchestrator |
2026-03-27 00:57:52.288805 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-27 00:57:52.288808 | orchestrator | Friday 27 March 2026 00:56:25 +0000 (0:00:00.282) 0:00:20.762 **********
2026-03-27 00:57:52.288811 | orchestrator | skipping: [testbed-node-3]
2026-03-27 00:57:52.288814 | orchestrator | skipping: [testbed-node-4]
2026-03-27 00:57:52.288817 | orchestrator | skipping: [testbed-node-5]
2026-03-27 00:57:52.288820 | orchestrator |
2026-03-27 00:57:52.288823 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact]
*********************** 2026-03-27 00:57:52.288826 | orchestrator | Friday 27 March 2026 00:56:25 +0000 (0:00:00.414) 0:00:21.177 ********** 2026-03-27 00:57:52.288829 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.288832 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.288835 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.288838 | orchestrator | 2026-03-27 00:57:52.288841 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-27 00:57:52.288857 | orchestrator | Friday 27 March 2026 00:56:26 +0000 (0:00:00.631) 0:00:21.809 ********** 2026-03-27 00:57:52.288864 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-27 00:57:52.288869 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-27 00:57:52.288873 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-27 00:57:52.288876 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-27 00:57:52.288879 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-27 00:57:52.288882 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-27 00:57:52.288885 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-27 00:57:52.288888 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-27 00:57:52.288891 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-27 00:57:52.288894 | orchestrator | 2026-03-27 00:57:52.288897 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-27 00:57:52.288900 | orchestrator | Friday 27 March 2026 00:56:27 +0000 (0:00:00.855) 0:00:22.664 ********** 2026-03-27 00:57:52.288903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-27 00:57:52.288906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-27 00:57:52.288909 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-03-27 00:57:52.288912 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.288918 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-27 00:57:52.288921 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-27 00:57:52.288924 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-27 00:57:52.288927 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.288930 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-27 00:57:52.288933 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-27 00:57:52.288936 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-27 00:57:52.288939 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.288942 | orchestrator | 2026-03-27 00:57:52.288945 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-27 00:57:52.288948 | orchestrator | Friday 27 March 2026 00:56:27 +0000 (0:00:00.412) 0:00:23.077 ********** 2026-03-27 00:57:52.288952 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 00:57:52.288955 | orchestrator | 2026-03-27 00:57:52.288958 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-27 00:57:52.288962 | orchestrator | Friday 27 March 2026 00:56:28 +0000 (0:00:00.786) 0:00:23.863 ********** 2026-03-27 00:57:52.288967 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.288970 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.288973 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.288976 | orchestrator | 2026-03-27 00:57:52.288979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-03-27 00:57:52.288982 | orchestrator | Friday 27 March 2026 00:56:28 +0000 (0:00:00.321) 0:00:24.185 ********** 2026-03-27 00:57:52.288986 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.288989 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.288992 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.288995 | orchestrator | 2026-03-27 00:57:52.288998 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-27 00:57:52.289001 | orchestrator | Friday 27 March 2026 00:56:29 +0000 (0:00:00.315) 0:00:24.500 ********** 2026-03-27 00:57:52.289004 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.289007 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.289010 | orchestrator | skipping: [testbed-node-5] 2026-03-27 00:57:52.289013 | orchestrator | 2026-03-27 00:57:52.289016 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-27 00:57:52.289019 | orchestrator | Friday 27 March 2026 00:56:29 +0000 (0:00:00.324) 0:00:24.825 ********** 2026-03-27 00:57:52.289022 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.289025 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.289028 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.289031 | orchestrator | 2026-03-27 00:57:52.289035 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-27 00:57:52.289038 | orchestrator | Friday 27 March 2026 00:56:30 +0000 (0:00:00.633) 0:00:25.459 ********** 2026-03-27 00:57:52.289041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:57:52.289044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-27 00:57:52.289047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:57:52.289050 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.289053 | 
orchestrator | 2026-03-27 00:57:52.289056 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-27 00:57:52.289059 | orchestrator | Friday 27 March 2026 00:56:30 +0000 (0:00:00.458) 0:00:25.918 ********** 2026-03-27 00:57:52.289062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:57:52.289067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-27 00:57:52.289070 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:57:52.289075 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.289079 | orchestrator | 2026-03-27 00:57:52.289082 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-27 00:57:52.289085 | orchestrator | Friday 27 March 2026 00:56:30 +0000 (0:00:00.405) 0:00:26.324 ********** 2026-03-27 00:57:52.289088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-27 00:57:52.289091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-27 00:57:52.289094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-27 00:57:52.289097 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.289100 | orchestrator | 2026-03-27 00:57:52.289103 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-27 00:57:52.289106 | orchestrator | Friday 27 March 2026 00:56:31 +0000 (0:00:00.378) 0:00:26.703 ********** 2026-03-27 00:57:52.289109 | orchestrator | ok: [testbed-node-3] 2026-03-27 00:57:52.289112 | orchestrator | ok: [testbed-node-4] 2026-03-27 00:57:52.289115 | orchestrator | ok: [testbed-node-5] 2026-03-27 00:57:52.289118 | orchestrator | 2026-03-27 00:57:52.289121 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-27 00:57:52.289125 | orchestrator | Friday 27 March 2026 00:56:31 +0000 
(0:00:00.317) 0:00:27.021 ********** 2026-03-27 00:57:52.289128 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-27 00:57:52.289133 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-27 00:57:52.289138 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-27 00:57:52.289143 | orchestrator | 2026-03-27 00:57:52.289148 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-27 00:57:52.289153 | orchestrator | Friday 27 March 2026 00:56:32 +0000 (0:00:00.508) 0:00:27.529 ********** 2026-03-27 00:57:52.289159 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-27 00:57:52.289163 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-27 00:57:52.289166 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-27 00:57:52.289169 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-27 00:57:52.289172 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-27 00:57:52.289175 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-27 00:57:52.289178 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-27 00:57:52.289181 | orchestrator | 2026-03-27 00:57:52.289185 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-27 00:57:52.289188 | orchestrator | Friday 27 March 2026 00:56:33 +0000 (0:00:01.020) 0:00:28.549 ********** 2026-03-27 00:57:52.289191 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-27 00:57:52.289194 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-27 00:57:52.289197 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-27 00:57:52.289200 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-27 00:57:52.289203 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-27 00:57:52.289206 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-27 00:57:52.289211 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-27 00:57:52.289214 | orchestrator | 2026-03-27 00:57:52.289218 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-27 00:57:52.289221 | orchestrator | Friday 27 March 2026 00:56:35 +0000 (0:00:02.104) 0:00:30.654 ********** 2026-03-27 00:57:52.289224 | orchestrator | skipping: [testbed-node-3] 2026-03-27 00:57:52.289227 | orchestrator | skipping: [testbed-node-4] 2026-03-27 00:57:52.289230 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-27 00:57:52.289235 | orchestrator | 2026-03-27 00:57:52.289239 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-27 00:57:52.289242 | orchestrator | Friday 27 March 2026 00:56:35 +0000 (0:00:00.381) 0:00:31.036 ********** 2026-03-27 00:57:52.289245 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-27 00:57:52.289248 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-03-27 00:57:52.289251 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-27 00:57:52.289256 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-27 00:57:52.289259 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-27 00:57:52.289262 | orchestrator | 2026-03-27 00:57:52.289266 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-27 00:57:52.289269 | orchestrator | Friday 27 March 2026 00:57:09 +0000 (0:00:34.025) 0:01:05.062 ********** 2026-03-27 00:57:52.289272 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289275 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289278 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289281 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289284 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289287 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 
00:57:52.289290 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-27 00:57:52.289293 | orchestrator | 2026-03-27 00:57:52.289296 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-27 00:57:52.289299 | orchestrator | Friday 27 March 2026 00:57:25 +0000 (0:00:15.524) 0:01:20.586 ********** 2026-03-27 00:57:52.289302 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289305 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289308 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289311 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289314 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289317 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289321 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-27 00:57:52.289324 | orchestrator | 2026-03-27 00:57:52.289327 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-27 00:57:52.289334 | orchestrator | Friday 27 March 2026 00:57:33 +0000 (0:00:08.358) 0:01:28.944 ********** 2026-03-27 00:57:52.289339 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289344 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-27 00:57:52.289353 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-27 00:57:52.289358 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289362 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-03-27 00:57:52.289370 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-27 00:57:52.289375 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289381 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-27 00:57:52.289384 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-27 00:57:52.289387 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289390 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-27 00:57:52.289393 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-27 00:57:52.289397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289400 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-27 00:57:52.289403 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-27 00:57:52.289406 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-27 00:57:52.289409 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-27 00:57:52.289412 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-27 00:57:52.289415 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-27 00:57:52.289418 | orchestrator | 2026-03-27 00:57:52.289421 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:57:52.289424 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-27 00:57:52.289427 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-27 00:57:52.289433 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-27 00:57:52.289436 | orchestrator | 2026-03-27 00:57:52.289439 | orchestrator | 2026-03-27 00:57:52.289442 | orchestrator | 2026-03-27 00:57:52.289445 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:57:52.289448 | orchestrator | Friday 27 March 2026 00:57:49 +0000 (0:00:16.027) 0:01:44.971 ********** 2026-03-27 00:57:52.289451 | orchestrator | =============================================================================== 2026-03-27 00:57:52.289454 | orchestrator | create openstack pool(s) ----------------------------------------------- 34.03s 2026-03-27 00:57:52.289457 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.03s 2026-03-27 00:57:52.289460 | orchestrator | generate keys ---------------------------------------------------------- 15.52s 2026-03-27 00:57:52.289463 | orchestrator | get keys from monitors -------------------------------------------------- 8.36s 2026-03-27 00:57:52.289466 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.06s 2026-03-27 00:57:52.289470 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.10s 2026-03-27 00:57:52.289473 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.49s 2026-03-27 00:57:52.289478 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s 2026-03-27 00:57:52.289483 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 1.00s 2026-03-27 00:57:52.289488 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.86s 2026-03-27 
00:57:52.289493 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s 2026-03-27 00:57:52.289498 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.83s 2026-03-27 00:57:52.289502 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.79s 2026-03-27 00:57:52.289507 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2026-03-27 00:57:52.289512 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s 2026-03-27 00:57:52.289517 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2026-03-27 00:57:52.289522 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.63s 2026-03-27 00:57:52.289527 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.63s 2026-03-27 00:57:52.289532 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.63s 2026-03-27 00:57:52.289537 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.61s 2026-03-27 00:57:52.289540 | orchestrator | 2026-03-27 00:57:52 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:57:52.290887 | orchestrator | 2026-03-27 00:57:52 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:57:52.290910 | orchestrator | 2026-03-27 00:57:52 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:57:55.343007 | orchestrator | 2026-03-27 00:57:55 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:57:55.345484 | orchestrator | 2026-03-27 00:57:55 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:57:55.349541 | orchestrator | 2026-03-27 00:57:55 | INFO  | Task 
3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:57:55.349588 | orchestrator | 2026-03-27 00:57:55 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:57:58.398326 | orchestrator | 2026-03-27 00:57:58 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:57:58.401826 | orchestrator | 2026-03-27 00:57:58 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:57:58.403731 | orchestrator | 2026-03-27 00:57:58 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:57:58.403778 | orchestrator | 2026-03-27 00:57:58 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:58:01.448531 | orchestrator | 2026-03-27 00:58:01 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:58:01.450208 | orchestrator | 2026-03-27 00:58:01 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:58:01.451934 | orchestrator | 2026-03-27 00:58:01 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:58:01.452135 | orchestrator | 2026-03-27 00:58:01 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:58:04.506654 | orchestrator | 2026-03-27 00:58:04 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:58:04.506697 | orchestrator | 2026-03-27 00:58:04 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:58:04.510250 | orchestrator | 2026-03-27 00:58:04 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:58:04.510302 | orchestrator | 2026-03-27 00:58:04 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:58:07.567329 | orchestrator | 2026-03-27 00:58:07 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:58:07.568685 | orchestrator | 2026-03-27 00:58:07 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state 
STARTED 2026-03-27 00:58:07.570684 | orchestrator | 2026-03-27 00:58:07 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:58:07.570906 | orchestrator | 2026-03-27 00:58:07 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:58:10.627386 | orchestrator | 2026-03-27 00:58:10 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:58:10.628622 | orchestrator | 2026-03-27 00:58:10 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:58:10.630174 | orchestrator | 2026-03-27 00:58:10 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:58:10.630396 | orchestrator | 2026-03-27 00:58:10 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:58:13.685498 | orchestrator | 2026-03-27 00:58:13 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:58:13.686605 | orchestrator | 2026-03-27 00:58:13 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:58:13.689168 | orchestrator | 2026-03-27 00:58:13 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:58:13.689582 | orchestrator | 2026-03-27 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:58:16.730830 | orchestrator | 2026-03-27 00:58:16 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:58:16.731381 | orchestrator | 2026-03-27 00:58:16 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:58:16.734043 | orchestrator | 2026-03-27 00:58:16 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:58:16.734084 | orchestrator | 2026-03-27 00:58:16 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:58:19.769660 | orchestrator | 2026-03-27 00:58:19 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state STARTED 2026-03-27 00:58:19.771382 | orchestrator | 
2026-03-27 00:58:19 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:58:19.773551 | orchestrator | 2026-03-27 00:58:19 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:58:19.773875 | orchestrator | 2026-03-27 00:58:19 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:58:22.820926 | orchestrator | 2026-03-27 00:58:22 | INFO  | Task d7ccbed7-35e7-4892-bb9d-79b0975a6a59 is in state SUCCESS 2026-03-27 00:58:22.822777 | orchestrator | 2026-03-27 00:58:22.822892 | orchestrator | 2026-03-27 00:58:22.822906 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 00:58:22.822914 | orchestrator | 2026-03-27 00:58:22.822921 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 00:58:22.822927 | orchestrator | Friday 27 March 2026 00:56:54 +0000 (0:00:00.283) 0:00:00.283 ********** 2026-03-27 00:58:22.822934 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.822940 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.822946 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.822952 | orchestrator | 2026-03-27 00:58:22.822958 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 00:58:22.822964 | orchestrator | Friday 27 March 2026 00:56:54 +0000 (0:00:00.304) 0:00:00.587 ********** 2026-03-27 00:58:22.822970 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-27 00:58:22.822976 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-27 00:58:22.823000 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-27 00:58:22.823007 | orchestrator | 2026-03-27 00:58:22.823013 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-27 00:58:22.823019 | orchestrator | 2026-03-27 00:58:22.823025 | orchestrator | 
TASK [horizon : include_tasks] ************************************************* 2026-03-27 00:58:22.823031 | orchestrator | Friday 27 March 2026 00:56:55 +0000 (0:00:00.272) 0:00:00.859 ********** 2026-03-27 00:58:22.823037 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:58:22.823044 | orchestrator | 2026-03-27 00:58:22.823051 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-27 00:58:22.823056 | orchestrator | Friday 27 March 2026 00:56:55 +0000 (0:00:00.513) 0:00:01.373 ********** 2026-03-27 00:58:22.823075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:58:22.823097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:58:22.823195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:58:22.823204 | orchestrator | 2026-03-27 00:58:22.823346 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-27 00:58:22.823360 | orchestrator | Friday 27 March 2026 00:56:57 +0000 (0:00:01.408) 0:00:02.781 ********** 2026-03-27 00:58:22.823366 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.823373 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.823378 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.823384 | orchestrator | 2026-03-27 00:58:22.823391 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-27 00:58:22.823398 | orchestrator | Friday 27 March 2026 00:56:57 +0000 (0:00:00.253) 0:00:03.035 ********** 2026-03-27 
00:58:22.823403 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-27 00:58:22.823418 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-27 00:58:22.823422 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-27 00:58:22.823426 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-27 00:58:22.823430 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-27 00:58:22.823433 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-27 00:58:22.823437 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-27 00:58:22.823441 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-27 00:58:22.823444 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-27 00:58:22.823448 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-27 00:58:22.823452 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-27 00:58:22.823455 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-27 00:58:22.823459 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-27 00:58:22.823463 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-27 00:58:22.823466 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-27 00:58:22.823470 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-27 00:58:22.823474 | orchestrator | skipping: [testbed-node-2] => (item={'name': 
'cloudkitty', 'enabled': False})  2026-03-27 00:58:22.823477 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-27 00:58:22.823481 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-27 00:58:22.823485 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-27 00:58:22.823492 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-27 00:58:22.823496 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-27 00:58:22.823500 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-27 00:58:22.823504 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-27 00:58:22.823508 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-27 00:58:22.823512 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-27 00:58:22.823516 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-27 00:58:22.823520 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-27 00:58:22.823524 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-27 00:58:22.823528 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-27 00:58:22.823531 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-27 00:58:22.823537 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-27 00:58:22.823541 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-27 00:58:22.823588 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-27 00:58:22.823597 | orchestrator | 2026-03-27 00:58:22.823603 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-27 00:58:22.823609 | orchestrator | Friday 27 March 2026 00:56:58 +0000 (0:00:00.601) 0:00:03.636 ********** 2026-03-27 00:58:22.823615 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.823620 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.823626 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.823632 | orchestrator | 2026-03-27 00:58:22.823637 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-27 00:58:22.823643 | orchestrator | Friday 27 March 2026 00:56:58 +0000 (0:00:00.391) 0:00:04.027 ********** 2026-03-27 00:58:22.823649 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.823655 | orchestrator | 2026-03-27 00:58:22.823666 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-27 00:58:22.823672 | orchestrator | Friday 27 March 2026 00:56:58 +0000 (0:00:00.127) 0:00:04.155 ********** 2026-03-27 00:58:22.823677 | 
orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.823684 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.823690 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.823696 | orchestrator | 2026-03-27 00:58:22.823702 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-27 00:58:22.823709 | orchestrator | Friday 27 March 2026 00:56:58 +0000 (0:00:00.265) 0:00:04.421 ********** 2026-03-27 00:58:22.823716 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.823722 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.823726 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.823729 | orchestrator | 2026-03-27 00:58:22.823733 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-27 00:58:22.823737 | orchestrator | Friday 27 March 2026 00:56:59 +0000 (0:00:00.275) 0:00:04.696 ********** 2026-03-27 00:58:22.823741 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.823744 | orchestrator | 2026-03-27 00:58:22.823748 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-27 00:58:22.823752 | orchestrator | Friday 27 March 2026 00:56:59 +0000 (0:00:00.104) 0:00:04.800 ********** 2026-03-27 00:58:22.823755 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.823759 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.823763 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.823766 | orchestrator | 2026-03-27 00:58:22.823770 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-27 00:58:22.823774 | orchestrator | Friday 27 March 2026 00:56:59 +0000 (0:00:00.387) 0:00:05.188 ********** 2026-03-27 00:58:22.823778 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.823781 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.823785 | orchestrator | ok: 
[testbed-node-2] 2026-03-27 00:58:22.823789 | orchestrator | 2026-03-27 00:58:22.823792 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-27 00:58:22.823796 | orchestrator | Friday 27 March 2026 00:56:59 +0000 (0:00:00.244) 0:00:05.432 ********** 2026-03-27 00:58:22.823800 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.823803 | orchestrator | 2026-03-27 00:58:22.823807 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-27 00:58:22.823821 | orchestrator | Friday 27 March 2026 00:56:59 +0000 (0:00:00.098) 0:00:05.530 ********** 2026-03-27 00:58:22.823853 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.823859 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.823870 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.823876 | orchestrator | 2026-03-27 00:58:22.823882 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-27 00:58:22.823888 | orchestrator | Friday 27 March 2026 00:57:00 +0000 (0:00:00.248) 0:00:05.779 ********** 2026-03-27 00:58:22.823894 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.823901 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.823907 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.823914 | orchestrator | 2026-03-27 00:58:22.823919 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-27 00:58:22.823923 | orchestrator | Friday 27 March 2026 00:57:00 +0000 (0:00:00.279) 0:00:06.058 ********** 2026-03-27 00:58:22.823926 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.823930 | orchestrator | 2026-03-27 00:58:22.823934 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-27 00:58:22.823938 | orchestrator | Friday 27 March 2026 00:57:00 +0000 (0:00:00.156) 0:00:06.214 ********** 
2026-03-27 00:58:22.823941 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.823945 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.823949 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.823952 | orchestrator | 2026-03-27 00:58:22.823956 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-27 00:58:22.823960 | orchestrator | Friday 27 March 2026 00:57:01 +0000 (0:00:00.479) 0:00:06.694 ********** 2026-03-27 00:58:22.823964 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.823967 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.823971 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.823975 | orchestrator | 2026-03-27 00:58:22.823979 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-27 00:58:22.823982 | orchestrator | Friday 27 March 2026 00:57:01 +0000 (0:00:00.322) 0:00:07.017 ********** 2026-03-27 00:58:22.823986 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.823990 | orchestrator | 2026-03-27 00:58:22.823994 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-27 00:58:22.823997 | orchestrator | Friday 27 March 2026 00:57:01 +0000 (0:00:00.129) 0:00:07.146 ********** 2026-03-27 00:58:22.824001 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824005 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824008 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824012 | orchestrator | 2026-03-27 00:58:22.824016 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-27 00:58:22.824020 | orchestrator | Friday 27 March 2026 00:57:01 +0000 (0:00:00.313) 0:00:07.460 ********** 2026-03-27 00:58:22.824023 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.824027 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.824031 | 
orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.824035 | orchestrator | 2026-03-27 00:58:22.824038 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-27 00:58:22.824042 | orchestrator | Friday 27 March 2026 00:57:02 +0000 (0:00:00.521) 0:00:07.981 ********** 2026-03-27 00:58:22.824046 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824050 | orchestrator | 2026-03-27 00:58:22.824056 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-27 00:58:22.824062 | orchestrator | Friday 27 March 2026 00:57:02 +0000 (0:00:00.119) 0:00:08.101 ********** 2026-03-27 00:58:22.824071 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824079 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824084 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824089 | orchestrator | 2026-03-27 00:58:22.824095 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-27 00:58:22.824105 | orchestrator | Friday 27 March 2026 00:57:02 +0000 (0:00:00.292) 0:00:08.394 ********** 2026-03-27 00:58:22.824111 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.824117 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.824122 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.824132 | orchestrator | 2026-03-27 00:58:22.824139 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-27 00:58:22.824146 | orchestrator | Friday 27 March 2026 00:57:03 +0000 (0:00:00.330) 0:00:08.724 ********** 2026-03-27 00:58:22.824152 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824158 | orchestrator | 2026-03-27 00:58:22.824166 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-27 00:58:22.824170 | orchestrator | Friday 27 March 2026 00:57:03 +0000 (0:00:00.158) 
0:00:08.883 ********** 2026-03-27 00:58:22.824173 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824177 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824181 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824184 | orchestrator | 2026-03-27 00:58:22.824188 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-27 00:58:22.824192 | orchestrator | Friday 27 March 2026 00:57:03 +0000 (0:00:00.349) 0:00:09.233 ********** 2026-03-27 00:58:22.824195 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.824199 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.824203 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.824206 | orchestrator | 2026-03-27 00:58:22.824210 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-27 00:58:22.824214 | orchestrator | Friday 27 March 2026 00:57:04 +0000 (0:00:00.483) 0:00:09.717 ********** 2026-03-27 00:58:22.824217 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824221 | orchestrator | 2026-03-27 00:58:22.824225 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-27 00:58:22.824229 | orchestrator | Friday 27 March 2026 00:57:04 +0000 (0:00:00.123) 0:00:09.840 ********** 2026-03-27 00:58:22.824234 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824238 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824242 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824246 | orchestrator | 2026-03-27 00:58:22.824250 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-27 00:58:22.824254 | orchestrator | Friday 27 March 2026 00:57:04 +0000 (0:00:00.234) 0:00:10.074 ********** 2026-03-27 00:58:22.824259 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.824263 | orchestrator | ok: [testbed-node-1] 2026-03-27 
00:58:22.824267 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.824271 | orchestrator | 2026-03-27 00:58:22.824278 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-27 00:58:22.824283 | orchestrator | Friday 27 March 2026 00:57:04 +0000 (0:00:00.243) 0:00:10.318 ********** 2026-03-27 00:58:22.824287 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824291 | orchestrator | 2026-03-27 00:58:22.824296 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-27 00:58:22.824300 | orchestrator | Friday 27 March 2026 00:57:04 +0000 (0:00:00.105) 0:00:10.423 ********** 2026-03-27 00:58:22.824304 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824308 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824312 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824317 | orchestrator | 2026-03-27 00:58:22.824321 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-27 00:58:22.824325 | orchestrator | Friday 27 March 2026 00:57:05 +0000 (0:00:00.299) 0:00:10.723 ********** 2026-03-27 00:58:22.824329 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:58:22.824334 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:58:22.824338 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:58:22.824342 | orchestrator | 2026-03-27 00:58:22.824346 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-27 00:58:22.824351 | orchestrator | Friday 27 March 2026 00:57:05 +0000 (0:00:00.464) 0:00:11.187 ********** 2026-03-27 00:58:22.824356 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824363 | orchestrator | 2026-03-27 00:58:22.824372 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-27 00:58:22.824380 | orchestrator | Friday 27 March 2026 00:57:05 +0000 
(0:00:00.120) 0:00:11.307 ********** 2026-03-27 00:58:22.824390 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824396 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824402 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824408 | orchestrator | 2026-03-27 00:58:22.824415 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-27 00:58:22.824421 | orchestrator | Friday 27 March 2026 00:57:05 +0000 (0:00:00.253) 0:00:11.560 ********** 2026-03-27 00:58:22.824428 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:58:22.824435 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:58:22.824441 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:58:22.824448 | orchestrator | 2026-03-27 00:58:22.824454 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-27 00:58:22.824463 | orchestrator | Friday 27 March 2026 00:57:07 +0000 (0:00:01.686) 0:00:13.247 ********** 2026-03-27 00:58:22.824470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-27 00:58:22.824476 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-27 00:58:22.824482 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-27 00:58:22.824489 | orchestrator | 2026-03-27 00:58:22.824496 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-27 00:58:22.824502 | orchestrator | Friday 27 March 2026 00:57:10 +0000 (0:00:02.493) 0:00:15.741 ********** 2026-03-27 00:58:22.824509 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-27 00:58:22.824517 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-27 
00:58:22.824523 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-27 00:58:22.824533 | orchestrator | 2026-03-27 00:58:22.824540 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-27 00:58:22.824551 | orchestrator | Friday 27 March 2026 00:57:12 +0000 (0:00:02.090) 0:00:17.832 ********** 2026-03-27 00:58:22.824558 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-27 00:58:22.824564 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-27 00:58:22.824569 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-27 00:58:22.824575 | orchestrator | 2026-03-27 00:58:22.824582 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-27 00:58:22.824588 | orchestrator | Friday 27 March 2026 00:57:13 +0000 (0:00:01.444) 0:00:19.276 ********** 2026-03-27 00:58:22.824595 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824602 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824607 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824610 | orchestrator | 2026-03-27 00:58:22.824614 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-27 00:58:22.824618 | orchestrator | Friday 27 March 2026 00:57:13 +0000 (0:00:00.294) 0:00:19.571 ********** 2026-03-27 00:58:22.824622 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824625 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824629 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824633 | orchestrator | 2026-03-27 00:58:22.824636 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2026-03-27 00:58:22.824640 | orchestrator | Friday 27 March 2026 00:57:14 +0000 (0:00:00.287) 0:00:19.859 ********** 2026-03-27 00:58:22.824644 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:58:22.824648 | orchestrator | 2026-03-27 00:58:22.824651 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-27 00:58:22.824659 | orchestrator | Friday 27 March 2026 00:57:15 +0000 (0:00:00.852) 0:00:20.711 ********** 2026-03-27 00:58:22.824667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:58:22.824678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:58:22.824690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:58:22.824695 | orchestrator | 2026-03-27 00:58:22.824699 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-27 00:58:22.824702 | orchestrator | Friday 27 March 2026 00:57:16 +0000 (0:00:01.312) 0:00:22.023 ********** 2026-03-27 00:58:22.824713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-27 00:58:22.824721 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2026-03-27 00:58:22.824732 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-27 00:58:22.824745 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824749 | orchestrator | 2026-03-27 00:58:22.824753 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-27 00:58:22.824756 | orchestrator | Friday 27 March 2026 00:57:17 +0000 (0:00:00.714) 0:00:22.737 ********** 2026-03-27 00:58:22.824764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-27 00:58:22.824768 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-27 00:58:22.824781 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-27 00:58:22.824795 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824799 | orchestrator | 2026-03-27 00:58:22.824802 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-27 00:58:22.824806 | orchestrator | Friday 27 March 2026 00:57:18 +0000 (0:00:00.937) 0:00:23.675 ********** 2026-03-27 00:58:22.824812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:58:22.824820 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:58:22.824829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-27 00:58:22.824852 | orchestrator | 2026-03-27 00:58:22.824856 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-27 00:58:22.824860 | orchestrator | Friday 27 March 2026 00:57:19 +0000 (0:00:01.041) 0:00:24.716 ********** 2026-03-27 00:58:22.824864 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:58:22.824867 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:58:22.824871 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:58:22.824875 | orchestrator | 2026-03-27 00:58:22.824879 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-27 00:58:22.824882 | orchestrator | Friday 27 March 2026 00:57:19 +0000 (0:00:00.272) 0:00:24.988 ********** 2026-03-27 00:58:22.824886 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:58:22.824890 | orchestrator | 2026-03-27 00:58:22.824894 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-27 00:58:22.824900 | orchestrator | Friday 27 March 2026 00:57:20 +0000 (0:00:00.675) 0:00:25.664 ********** 2026-03-27 00:58:22.824904 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:58:22.824910 | orchestrator | 2026-03-27 00:58:22.824914 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-27 00:58:22.824918 | orchestrator | Friday 27 March 2026 00:57:22 +0000 (0:00:02.156) 0:00:27.820 ********** 2026-03-27 00:58:22.824922 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:58:22.824925 | 
orchestrator | 2026-03-27 00:58:22.824929 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-27 00:58:22.824933 | orchestrator | Friday 27 March 2026 00:57:24 +0000 (0:00:02.244) 0:00:30.065 ********** 2026-03-27 00:58:22.824937 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:58:22.824940 | orchestrator | 2026-03-27 00:58:22.824944 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-27 00:58:22.824948 | orchestrator | Friday 27 March 2026 00:57:37 +0000 (0:00:13.401) 0:00:43.466 ********** 2026-03-27 00:58:22.824952 | orchestrator | 2026-03-27 00:58:22.824955 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-27 00:58:22.824959 | orchestrator | Friday 27 March 2026 00:57:37 +0000 (0:00:00.063) 0:00:43.530 ********** 2026-03-27 00:58:22.824964 | orchestrator | 2026-03-27 00:58:22.824970 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-27 00:58:22.824979 | orchestrator | Friday 27 March 2026 00:57:37 +0000 (0:00:00.062) 0:00:43.592 ********** 2026-03-27 00:58:22.824986 | orchestrator | 2026-03-27 00:58:22.824992 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-27 00:58:22.824998 | orchestrator | Friday 27 March 2026 00:57:38 +0000 (0:00:00.066) 0:00:43.659 ********** 2026-03-27 00:58:22.825005 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:58:22.825012 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:58:22.825018 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:58:22.825024 | orchestrator | 2026-03-27 00:58:22.825031 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:58:22.825036 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-27 
00:58:22.825040 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-27 00:58:22.825047 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-27 00:58:22.825051 | orchestrator | 2026-03-27 00:58:22.825055 | orchestrator | 2026-03-27 00:58:22.825058 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:58:22.825062 | orchestrator | Friday 27 March 2026 00:58:20 +0000 (0:00:42.556) 0:01:26.216 ********** 2026-03-27 00:58:22.825066 | orchestrator | =============================================================================== 2026-03-27 00:58:22.825070 | orchestrator | horizon : Restart horizon container ------------------------------------ 42.56s 2026-03-27 00:58:22.825073 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.40s 2026-03-27 00:58:22.825077 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.49s 2026-03-27 00:58:22.825081 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.24s 2026-03-27 00:58:22.825084 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.16s 2026-03-27 00:58:22.825088 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.09s 2026-03-27 00:58:22.825092 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.69s 2026-03-27 00:58:22.825096 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.44s 2026-03-27 00:58:22.825099 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.41s 2026-03-27 00:58:22.825103 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.31s 2026-03-27 00:58:22.825107 | orchestrator | 
horizon : Deploy horizon container -------------------------------------- 1.04s 2026-03-27 00:58:22.825114 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.94s 2026-03-27 00:58:22.825118 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.85s 2026-03-27 00:58:22.825121 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2026-03-27 00:58:22.825125 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s 2026-03-27 00:58:22.825129 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-03-27 00:58:22.825133 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2026-03-27 00:58:22.825136 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2026-03-27 00:58:22.825140 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s 2026-03-27 00:58:22.825144 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s 2026-03-27 00:58:22.825148 | orchestrator | 2026-03-27 00:58:22 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:58:22.825505 | orchestrator | 2026-03-27 00:58:22 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:58:22.825516 | orchestrator | 2026-03-27 00:58:22 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:58:25.872142 | orchestrator | 2026-03-27 00:58:25 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state STARTED 2026-03-27 00:58:25.874449 | orchestrator | 2026-03-27 00:58:25 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED 2026-03-27 00:58:25.874498 | orchestrator | 2026-03-27 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-03-27 00:58:28.916471 | 
orchestrator | 2026-03-27 00:58:28 | INFO  | Task 92d244c1-b97b-4e4d-9a4d-3c3c2f666dc9 is in state STARTED
2026-03-27 00:58:28.917240 | orchestrator | 2026-03-27 00:58:28 | INFO  | Task 50e6aa7b-f3fa-4f0c-a403-b1aa11c09dc2 is in state SUCCESS
2026-03-27 00:58:28.919591 | orchestrator | 2026-03-27 00:58:28 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state STARTED
2026-03-27 00:58:28.919701 | orchestrator | 2026-03-27 00:58:28 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED checks for tasks 92d244c1 and 3f8e1dfa repeated every ~3 seconds from 00:58:31 through 00:59:23 ...]
2026-03-27 00:59:26.852848 | orchestrator | 2026-03-27 00:59:26 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED
2026-03-27 00:59:26.853908 | orchestrator | 2026-03-27 00:59:26 | INFO  | Task d7fd2565-b9ab-44ca-91bd-a2136642f51c is in state STARTED
2026-03-27 00:59:26.854577 | orchestrator | 2026-03-27 00:59:26 | INFO  | Task d1a99260-a12b-4127-9c44-112f4e710bc4 is in state STARTED
2026-03-27 00:59:26.855761 | orchestrator | 2026-03-27 00:59:26 | INFO  | Task 92d244c1-b97b-4e4d-9a4d-3c3c2f666dc9 is in state SUCCESS
2026-03-27 00:59:26.855992 | orchestrator
|
2026-03-27 00:59:26.856016 | orchestrator |
2026-03-27 00:59:26.856025 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-27 00:59:26.856035 | orchestrator |
2026-03-27 00:59:26.856043 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-27 00:59:26.856051 | orchestrator | Friday 27 March 2026 00:57:53 +0000 (0:00:00.252) 0:00:00.252 **********
2026-03-27 00:59:26.856059 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-27 00:59:26.856069 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856077 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856085 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-27 00:59:26.856092 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856100 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-27 00:59:26.856108 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-27 00:59:26.856116 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-27 00:59:26.856124 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-27 00:59:26.856132 | orchestrator |
2026-03-27 00:59:26.856166 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-27 00:59:26.856174 | orchestrator | Friday 27 March 2026 00:57:57 +0000 (0:00:04.488) 0:00:04.740 **********
2026-03-27 00:59:26.856179 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-27 00:59:26.856184 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856189 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856194 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-27 00:59:26.856199 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856204 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-27 00:59:26.856208 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-27 00:59:26.856213 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-27 00:59:26.856218 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-27 00:59:26.856222 | orchestrator |
2026-03-27 00:59:26.856227 | orchestrator | TASK [Create share directory] **************************************************
2026-03-27 00:59:26.856232 | orchestrator | Friday 27 March 2026 00:58:01 +0000 (0:00:04.012) 0:00:08.752 **********
2026-03-27 00:59:26.856237 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-27 00:59:26.856242 | orchestrator |
2026-03-27 00:59:26.856247 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-27 00:59:26.856265 | orchestrator | Friday 27 March 2026 00:58:02 +0000 (0:00:01.067) 0:00:09.820 **********
2026-03-27 00:59:26.856273 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-27 00:59:26.856281 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856288 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856295 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-27 00:59:26.856302 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856310 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-27 00:59:26.856317 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-27 00:59:26.856325 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-27 00:59:26.856333 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-27 00:59:26.856352 | orchestrator |
2026-03-27 00:59:26.856357 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-27 00:59:26.856369 | orchestrator | Friday 27 March 2026 00:58:16 +0000 (0:00:13.884) 0:00:23.705 **********
2026-03-27 00:59:26.856374 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-27 00:59:26.856379 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-27 00:59:26.856384 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-27 00:59:26.856388 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-27 00:59:26.856403 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-27 00:59:26.856408 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-27 00:59:26.856419 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-27 00:59:26.856428 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-27 00:59:26.856436 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-27 00:59:26.856447 | orchestrator |
2026-03-27 00:59:26.856458 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-27 00:59:26.856468 | orchestrator | Friday 27 March 2026 00:58:19 +0000 (0:00:03.128) 0:00:26.833 **********
2026-03-27 00:59:26.856477 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-27 00:59:26.856486 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856494 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856502 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-27 00:59:26.856510 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-27 00:59:26.856517 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-27 00:59:26.856525 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-27 00:59:26.856532 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-27 00:59:26.856540 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-27 00:59:26.856547 | orchestrator |
2026-03-27 00:59:26.856557 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:59:26.856567 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 00:59:26.856576 | orchestrator |
2026-03-27 00:59:26.856584 | orchestrator |
2026-03-27 00:59:26.856592 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:59:26.856601 | orchestrator | Friday 27 March 2026 00:58:26 +0000 (0:00:06.917) 0:00:33.751 **********
2026-03-27 00:59:26.856609 | orchestrator | ===============================================================================
2026-03-27 00:59:26.856618 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.88s
2026-03-27 00:59:26.856627 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.92s
2026-03-27 00:59:26.856636 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.49s
2026-03-27 00:59:26.856644 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.01s
2026-03-27 00:59:26.856653 | orchestrator | Check if target directories exist --------------------------------------- 3.13s
2026-03-27 00:59:26.856662 | orchestrator | Create share directory -------------------------------------------------- 1.07s
2026-03-27 00:59:26.856670 | orchestrator |
2026-03-27 00:59:26.856678 | orchestrator |
2026-03-27 00:59:26.856687 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-27 00:59:26.856695 | orchestrator |
2026-03-27 00:59:26.856704 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-27 00:59:26.856710 | orchestrator | Friday 27 March 2026 00:58:30 +0000 (0:00:00.332) 0:00:00.332 **********
2026-03-27 00:59:26.856721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-27 00:59:26.856727 | orchestrator |
2026-03-27
00:59:26.856732 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-27 00:59:26.856737 | orchestrator | Friday 27 March 2026 00:58:30 +0000 (0:00:00.213) 0:00:00.546 **********
2026-03-27 00:59:26.856742 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-27 00:59:26.856747 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-27 00:59:26.856752 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-27 00:59:26.856765 | orchestrator |
2026-03-27 00:59:26.856769 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-27 00:59:26.856774 | orchestrator | Friday 27 March 2026 00:58:32 +0000 (0:00:01.491) 0:00:02.038 **********
2026-03-27 00:59:26.856779 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-27 00:59:26.856784 | orchestrator |
2026-03-27 00:59:26.856788 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-27 00:59:26.856793 | orchestrator | Friday 27 March 2026 00:58:33 +0000 (0:00:01.149) 0:00:03.188 **********
2026-03-27 00:59:26.856798 | orchestrator | changed: [testbed-manager]
2026-03-27 00:59:26.856824 | orchestrator |
2026-03-27 00:59:26.856829 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-27 00:59:26.856834 | orchestrator | Friday 27 March 2026 00:58:34 +0000 (0:00:00.817) 0:00:04.006 **********
2026-03-27 00:59:26.856839 | orchestrator | changed: [testbed-manager]
2026-03-27 00:59:26.856843 | orchestrator |
2026-03-27 00:59:26.856848 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-27 00:59:26.856853 | orchestrator | Friday 27 March 2026 00:58:35 +0000 (0:00:00.810) 0:00:04.816 **********
2026-03-27 00:59:26.856858 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-27 00:59:26.856862 | orchestrator | ok: [testbed-manager]
2026-03-27 00:59:26.856867 | orchestrator |
2026-03-27 00:59:26.856872 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-27 00:59:26.856884 | orchestrator | Friday 27 March 2026 00:59:17 +0000 (0:00:41.986) 0:00:46.803 **********
2026-03-27 00:59:26.856889 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-27 00:59:26.856894 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-27 00:59:26.856899 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-27 00:59:26.856903 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-27 00:59:26.856908 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-27 00:59:26.856913 | orchestrator |
2026-03-27 00:59:26.856918 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-27 00:59:26.856922 | orchestrator | Friday 27 March 2026 00:59:20 +0000 (0:00:03.845) 0:00:50.648 **********
2026-03-27 00:59:26.856928 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-27 00:59:26.856936 | orchestrator |
2026-03-27 00:59:26.856943 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-27 00:59:26.856955 | orchestrator | Friday 27 March 2026 00:59:21 +0000 (0:00:00.524) 0:00:51.172 **********
2026-03-27 00:59:26.856964 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:59:26.856972 | orchestrator |
2026-03-27 00:59:26.856980 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-27 00:59:26.856987 | orchestrator | Friday 27 March 2026 00:59:21 +0000 (0:00:00.132) 0:00:51.305 **********
2026-03-27 00:59:26.856995 | orchestrator | skipping: [testbed-manager]
2026-03-27 00:59:26.857002 | orchestrator |
2026-03-27 00:59:26.857009 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-27 00:59:26.857016 | orchestrator | Friday 27 March 2026 00:59:21 +0000 (0:00:00.344) 0:00:51.649 **********
2026-03-27 00:59:26.857023 | orchestrator | changed: [testbed-manager]
2026-03-27 00:59:26.857031 | orchestrator |
2026-03-27 00:59:26.857039 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-27 00:59:26.857046 | orchestrator | Friday 27 March 2026 00:59:23 +0000 (0:00:01.363) 0:00:53.013 **********
2026-03-27 00:59:26.857053 | orchestrator | changed: [testbed-manager]
2026-03-27 00:59:26.857061 | orchestrator |
2026-03-27 00:59:26.857068 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-27 00:59:26.857076 | orchestrator | Friday 27 March 2026 00:59:24 +0000 (0:00:00.665) 0:00:53.678 **********
2026-03-27 00:59:26.857084 | orchestrator | changed: [testbed-manager]
2026-03-27 00:59:26.857098 | orchestrator |
2026-03-27 00:59:26.857106 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-27 00:59:26.857114 | orchestrator | Friday 27 March 2026 00:59:24 +0000 (0:00:00.521) 0:00:54.200 **********
2026-03-27 00:59:26.857122 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-27 00:59:26.857130 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-27 00:59:26.857137 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-27 00:59:26.857145 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-27 00:59:26.857152 | orchestrator |
2026-03-27 00:59:26.857160 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 00:59:26.857168 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 00:59:26.857176 | orchestrator |
2026-03-27 00:59:26.857183 | orchestrator |
2026-03-27 00:59:26.857191 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 00:59:26.857199 | orchestrator | Friday 27 March 2026 00:59:26 +0000 (0:00:01.565) 0:00:55.765 **********
2026-03-27 00:59:26.857207 | orchestrator | ===============================================================================
2026-03-27 00:59:26.857215 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.99s
2026-03-27 00:59:26.857223 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.85s
2026-03-27 00:59:26.857235 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.57s
2026-03-27 00:59:26.857243 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.49s
2026-03-27 00:59:26.857252 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.36s
2026-03-27 00:59:26.857259 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s
2026-03-27 00:59:26.857266 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.82s
2026-03-27 00:59:26.857274 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.81s
2026-03-27 00:59:26.857282 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.67s
2026-03-27 00:59:26.857290 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.52s
2026-03-27 00:59:26.857297 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.52s
2026-03-27 00:59:26.857305 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.34s
2026-03-27 00:59:26.857313 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s
2026-03-27 00:59:26.857321 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2026-03-27 00:59:26.857509 | orchestrator | 2026-03-27 00:59:26 | INFO  | Task 3f8e1dfa-1450-4710-8320-5c80ca733600 is in state SUCCESS
2026-03-27 00:59:26.859240 | orchestrator |
2026-03-27 00:59:26.859302 | orchestrator |
2026-03-27 00:59:26.859312 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 00:59:26.859320 | orchestrator |
2026-03-27 00:59:26.859327 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 00:59:26.859334 | orchestrator | Friday 27 March 2026 00:56:54 +0000 (0:00:00.240) 0:00:00.240 **********
2026-03-27 00:59:26.859343 | orchestrator | ok: [testbed-node-0]
2026-03-27 00:59:26.859355 | orchestrator | ok: [testbed-node-1]
2026-03-27 00:59:26.859372 | orchestrator | ok: [testbed-node-2]
2026-03-27 00:59:26.859386 | orchestrator |
2026-03-27 00:59:26.859396 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 00:59:26.859408 | orchestrator | Friday 27 March 2026 00:56:54 +0000 (0:00:00.238) 0:00:00.478 **********
2026-03-27 00:59:26.859420 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-27 00:59:26.859433 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-27 00:59:26.859440 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-27 00:59:26.859464 | orchestrator |
2026-03-27 00:59:26.859471 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-03-27 00:59:26.859481 | orchestrator |
2026-03-27 00:59:26.859492 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-27 00:59:26.859503 |
orchestrator | Friday 27 March 2026 00:56:55 +0000 (0:00:00.264) 0:00:00.742 **********
2026-03-27 00:59:26.859514 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:59:26.859525 | orchestrator |
2026-03-27 00:59:26.859537 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-03-27 00:59:26.859547 | orchestrator | Friday 27 March 2026 00:56:55 +0000 (0:00:00.578) 0:00:01.321 **********
2026-03-27 00:59:26.859566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-27 00:59:26.859596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-27 00:59:26.859668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-27 00:59:26.859683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-27 00:59:26.859706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-27 00:59:26.859719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-27 00:59:26.859732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-27 00:59:26.859750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-27 00:59:26.859763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-27 00:59:26.859775 | orchestrator |
2026-03-27 00:59:26.859787 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-03-27 00:59:26.859799 | orchestrator | Friday 27 March 2026 00:56:57 +0000 (0:00:02.216) 0:00:03.537 **********
2026-03-27 00:59:26.859836 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:59:26.859850 | orchestrator |
2026-03-27 00:59:26.859871 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-03-27 00:59:26.859891 | orchestrator | Friday 27 March 2026 00:56:58 +0000 (0:00:00.108) 0:00:03.646 **********
2026-03-27 00:59:26.859904 | orchestrator | skipping: [testbed-node-0]
2026-03-27 00:59:26.859916 | orchestrator | skipping: [testbed-node-1]
2026-03-27 00:59:26.859927 | orchestrator | skipping: [testbed-node-2]
2026-03-27 00:59:26.859938 | orchestrator |
2026-03-27 00:59:26.859950 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-03-27 00:59:26.859962 | orchestrator | Friday 27 March 2026 00:56:58 +0000 (0:00:00.272) 0:00:03.918 **********
2026-03-27 00:59:26.859973 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-27 00:59:26.859985 | orchestrator |
2026-03-27 00:59:26.859996 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-27 00:59:26.860007 | orchestrator | Friday 27 March 2026 00:56:59 +0000 (0:00:00.749) 0:00:04.668 **********
2026-03-27 00:59:26.860019 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 00:59:26.860031 | orchestrator |
2026-03-27 00:59:26.860042 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-03-27 00:59:26.860053 | orchestrator | Friday 27 March 2026 00:56:59 +0000 (0:00:00.566) 0:00:05.234 **********
2026-03-27 00:59:26.860065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-27 00:59:26.860079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-27 00:59:26.860098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:59:26.860130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-03-27 00:59:26.860157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860206 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860227 | orchestrator | 2026-03-27 00:59:26.860241 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-27 00:59:26.860253 | orchestrator | Friday 27 March 2026 00:57:02 +0000 (0:00:02.924) 0:00:08.159 ********** 2026-03-27 00:59:26.860274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:59:26.860286 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.860299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:59:26.860311 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.860322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:59:26.860342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.860363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:59:26.860376 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.860397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:59:26.860411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.860423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:59:26.860435 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:59:26.860447 | orchestrator | 2026-03-27 00:59:26.860459 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-27 00:59:26.860471 | orchestrator | Friday 27 March 2026 00:57:03 +0000 (0:00:00.547) 0:00:08.707 ********** 2026-03-27 00:59:26.860487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:59:26.860502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.860515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:59:26.860523 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.860531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:59:26.860543 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.860556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:59:26.860574 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.860592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:59:26.860613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.860626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:59:26.860638 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:59:26.860650 | orchestrator | 2026-03-27 00:59:26.860664 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-27 00:59:26.860673 | orchestrator | Friday 27 March 2026 00:57:04 +0000 (0:00:00.981) 0:00:09.688 ********** 2026-03-27 
00:59:26.860681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:59:26.860694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:59:26.860715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:59:26.860724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860778 | orchestrator | 2026-03-27 00:59:26.860785 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-27 00:59:26.860792 | orchestrator | Friday 27 March 2026 00:57:06 +0000 (0:00:02.828) 0:00:12.517 ********** 2026-03-27 00:59:26.860831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:59:26.860841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.860849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:59:26.860868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.860882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:59:26.860890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.860899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.860928 | orchestrator | 2026-03-27 00:59:26.860935 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-27 00:59:26.860942 | orchestrator | Friday 27 March 2026 00:57:12 +0000 (0:00:05.446) 0:00:17.963 ********** 2026-03-27 00:59:26.860950 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:59:26.860957 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:59:26.860964 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:59:26.860971 | orchestrator | 2026-03-27 00:59:26.860978 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-27 00:59:26.860985 | orchestrator | Friday 27 March 2026 00:57:13 +0000 (0:00:01.533) 0:00:19.496 ********** 2026-03-27 00:59:26.860992 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.860999 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.861013 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:59:26.861021 | orchestrator | 2026-03-27 00:59:26.861028 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-27 00:59:26.861035 | orchestrator | Friday 27 March 2026 00:57:14 +0000 (0:00:01.047) 0:00:20.544 ********** 2026-03-27 00:59:26.861042 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.861048 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.861055 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:59:26.861062 | orchestrator | 2026-03-27 00:59:26.861070 | orchestrator | TASK [keystone : Copying Keystone 
Domain specific settings] ******************** 2026-03-27 00:59:26.861076 | orchestrator | Friday 27 March 2026 00:57:15 +0000 (0:00:00.326) 0:00:20.870 ********** 2026-03-27 00:59:26.861083 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.861090 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.861097 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:59:26.861104 | orchestrator | 2026-03-27 00:59:26.861110 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-27 00:59:26.861117 | orchestrator | Friday 27 March 2026 00:57:15 +0000 (0:00:00.368) 0:00:21.239 ********** 2026-03-27 00:59:26.861131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:59:26.861139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.861153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:59:26.861161 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.861169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:59:26.861180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.861192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:59:26.861199 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.861207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-27 00:59:26.861222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-27 00:59:26.861230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-27 00:59:26.861237 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:59:26.861244 | orchestrator | 2026-03-27 
00:59:26.861251 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-27 00:59:26.861258 | orchestrator | Friday 27 March 2026 00:57:16 +0000 (0:00:00.527) 0:00:21.766 ********** 2026-03-27 00:59:26.861265 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.861272 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.861279 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:59:26.861286 | orchestrator | 2026-03-27 00:59:26.861293 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-27 00:59:26.861299 | orchestrator | Friday 27 March 2026 00:57:16 +0000 (0:00:00.404) 0:00:22.171 ********** 2026-03-27 00:59:26.861306 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-27 00:59:26.861318 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-27 00:59:26.861325 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-27 00:59:26.861332 | orchestrator | 2026-03-27 00:59:26.861341 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-27 00:59:26.861353 | orchestrator | Friday 27 March 2026 00:57:17 +0000 (0:00:01.408) 0:00:23.579 ********** 2026-03-27 00:59:26.861365 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-27 00:59:26.861376 | orchestrator | 2026-03-27 00:59:26.861387 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-27 00:59:26.861399 | orchestrator | Friday 27 March 2026 00:57:18 +0000 (0:00:00.857) 0:00:24.436 ********** 2026-03-27 00:59:26.861411 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.861423 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.861435 | orchestrator | skipping: 
[testbed-node-2] 2026-03-27 00:59:26.861445 | orchestrator | 2026-03-27 00:59:26.861452 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-27 00:59:26.861459 | orchestrator | Friday 27 March 2026 00:57:19 +0000 (0:00:00.468) 0:00:24.905 ********** 2026-03-27 00:59:26.861466 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-27 00:59:26.861473 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-27 00:59:26.861480 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-27 00:59:26.861487 | orchestrator | 2026-03-27 00:59:26.861500 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-27 00:59:26.861513 | orchestrator | Friday 27 March 2026 00:57:20 +0000 (0:00:01.167) 0:00:26.072 ********** 2026-03-27 00:59:26.861520 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:59:26.861528 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:59:26.861535 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:59:26.861542 | orchestrator | 2026-03-27 00:59:26.861549 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-27 00:59:26.861556 | orchestrator | Friday 27 March 2026 00:57:20 +0000 (0:00:00.424) 0:00:26.497 ********** 2026-03-27 00:59:26.861563 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-27 00:59:26.861569 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-27 00:59:26.861576 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-27 00:59:26.861583 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-27 00:59:26.861590 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-27 00:59:26.861597 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-27 00:59:26.861604 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-27 00:59:26.861611 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-27 00:59:26.861618 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-27 00:59:26.861625 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-27 00:59:26.861632 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-27 00:59:26.861639 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-27 00:59:26.861646 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-27 00:59:26.861653 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-27 00:59:26.861665 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-27 00:59:26.861677 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-27 00:59:26.861689 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-27 00:59:26.861700 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-27 00:59:26.861710 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-27 00:59:26.861722 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-27 
00:59:26.861732 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-27 00:59:26.861742 | orchestrator | 2026-03-27 00:59:26.861754 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-27 00:59:26.861765 | orchestrator | Friday 27 March 2026 00:57:28 +0000 (0:00:07.666) 0:00:34.163 ********** 2026-03-27 00:59:26.861776 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-27 00:59:26.861788 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-27 00:59:26.861800 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-27 00:59:26.861901 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-27 00:59:26.861927 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-27 00:59:26.861941 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-27 00:59:26.861948 | orchestrator | 2026-03-27 00:59:26.861955 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-27 00:59:26.861961 | orchestrator | Friday 27 March 2026 00:57:30 +0000 (0:00:02.340) 0:00:36.504 ********** 2026-03-27 00:59:26.861978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:59:26.861987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:59:26.861995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-27 00:59:26.862071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-27 00:59:26.862094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-27 00:59:26.862102 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-27 00:59:26.862116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.862124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.862131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-27 00:59:26.862138 | orchestrator | 2026-03-27 00:59:26.862146 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-27 00:59:26.862152 | orchestrator | Friday 27 March 2026 00:57:32 +0000 (0:00:02.066) 0:00:38.570 ********** 2026-03-27 00:59:26.862159 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.862167 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.862174 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:59:26.862181 | orchestrator | 2026-03-27 00:59:26.862187 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-27 00:59:26.862258 | orchestrator | Friday 27 March 2026 00:57:33 +0000 (0:00:00.455) 0:00:39.025 ********** 2026-03-27 00:59:26.862268 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:59:26.862282 | orchestrator | 2026-03-27 00:59:26.862289 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-27 00:59:26.862296 | orchestrator | Friday 27 March 2026 00:57:35 +0000 (0:00:02.015) 0:00:41.041 ********** 2026-03-27 00:59:26.862304 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:59:26.862310 | orchestrator | 2026-03-27 00:59:26.862317 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-27 00:59:26.862324 | orchestrator | Friday 27 March 2026 00:57:37 +0000 (0:00:02.139) 0:00:43.180 ********** 2026-03-27 
00:59:26.862331 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:59:26.862340 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:59:26.862351 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:59:26.862363 | orchestrator | 2026-03-27 00:59:26.862373 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-27 00:59:26.862384 | orchestrator | Friday 27 March 2026 00:57:38 +0000 (0:00:00.747) 0:00:43.927 ********** 2026-03-27 00:59:26.862396 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:59:26.862407 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:59:26.862416 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:59:26.862426 | orchestrator | 2026-03-27 00:59:26.862444 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-27 00:59:26.862455 | orchestrator | Friday 27 March 2026 00:57:38 +0000 (0:00:00.370) 0:00:44.298 ********** 2026-03-27 00:59:26.862466 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.862478 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.862489 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:59:26.862502 | orchestrator | 2026-03-27 00:59:26.862509 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-27 00:59:26.862516 | orchestrator | Friday 27 March 2026 00:57:39 +0000 (0:00:00.371) 0:00:44.670 ********** 2026-03-27 00:59:26.862523 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:59:26.862529 | orchestrator | 2026-03-27 00:59:26.862536 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-27 00:59:26.862543 | orchestrator | Friday 27 March 2026 00:57:52 +0000 (0:00:13.904) 0:00:58.575 ********** 2026-03-27 00:59:26.862549 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:59:26.862556 | orchestrator | 2026-03-27 00:59:26.862563 | orchestrator | TASK [keystone : Flush 
handlers] *********************************************** 2026-03-27 00:59:26.862570 | orchestrator | Friday 27 March 2026 00:58:04 +0000 (0:00:11.168) 0:01:09.743 ********** 2026-03-27 00:59:26.862577 | orchestrator | 2026-03-27 00:59:26.862583 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-27 00:59:26.862590 | orchestrator | Friday 27 March 2026 00:58:04 +0000 (0:00:00.068) 0:01:09.812 ********** 2026-03-27 00:59:26.862597 | orchestrator | 2026-03-27 00:59:26.862604 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-27 00:59:26.862618 | orchestrator | Friday 27 March 2026 00:58:04 +0000 (0:00:00.068) 0:01:09.881 ********** 2026-03-27 00:59:26.862625 | orchestrator | 2026-03-27 00:59:26.862632 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-27 00:59:26.862639 | orchestrator | Friday 27 March 2026 00:58:04 +0000 (0:00:00.068) 0:01:09.950 ********** 2026-03-27 00:59:26.862646 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:59:26.862653 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:59:26.862660 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:59:26.862666 | orchestrator | 2026-03-27 00:59:26.862673 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-27 00:59:26.862680 | orchestrator | Friday 27 March 2026 00:58:13 +0000 (0:00:09.315) 0:01:19.265 ********** 2026-03-27 00:59:26.862687 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:59:26.862694 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:59:26.862701 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:59:26.862708 | orchestrator | 2026-03-27 00:59:26.862714 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-27 00:59:26.862728 | orchestrator | Friday 27 March 2026 00:58:22 +0000 
(0:00:09.093) 0:01:28.358 ********** 2026-03-27 00:59:26.862735 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:59:26.862742 | orchestrator | changed: [testbed-node-2] 2026-03-27 00:59:26.862748 | orchestrator | changed: [testbed-node-1] 2026-03-27 00:59:26.862755 | orchestrator | 2026-03-27 00:59:26.862762 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-27 00:59:26.862769 | orchestrator | Friday 27 March 2026 00:58:28 +0000 (0:00:05.884) 0:01:34.243 ********** 2026-03-27 00:59:26.862775 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 00:59:26.862782 | orchestrator | 2026-03-27 00:59:26.862789 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-27 00:59:26.862796 | orchestrator | Friday 27 March 2026 00:58:29 +0000 (0:00:00.523) 0:01:34.766 ********** 2026-03-27 00:59:26.862821 | orchestrator | ok: [testbed-node-1] 2026-03-27 00:59:26.862830 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:59:26.862837 | orchestrator | ok: [testbed-node-2] 2026-03-27 00:59:26.862844 | orchestrator | 2026-03-27 00:59:26.862851 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-27 00:59:26.862857 | orchestrator | Friday 27 March 2026 00:58:29 +0000 (0:00:00.796) 0:01:35.562 ********** 2026-03-27 00:59:26.862864 | orchestrator | changed: [testbed-node-0] 2026-03-27 00:59:26.862871 | orchestrator | 2026-03-27 00:59:26.862878 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-27 00:59:26.862885 | orchestrator | Friday 27 March 2026 00:58:31 +0000 (0:00:01.786) 0:01:37.349 ********** 2026-03-27 00:59:26.862892 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-27 00:59:26.862898 | orchestrator | 2026-03-27 00:59:26.862905 | orchestrator | TASK 
[service-ks-register : keystone | Creating services] ********************** 2026-03-27 00:59:26.862912 | orchestrator | Friday 27 March 2026 00:58:44 +0000 (0:00:12.484) 0:01:49.833 ********** 2026-03-27 00:59:26.862923 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-27 00:59:26.862935 | orchestrator | 2026-03-27 00:59:26.862946 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-27 00:59:26.862957 | orchestrator | Friday 27 March 2026 00:59:12 +0000 (0:00:28.427) 0:02:18.260 ********** 2026-03-27 00:59:26.862968 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-27 00:59:26.862979 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-27 00:59:26.862989 | orchestrator | 2026-03-27 00:59:26.863000 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-27 00:59:26.863012 | orchestrator | Friday 27 March 2026 00:59:20 +0000 (0:00:07.771) 0:02:26.032 ********** 2026-03-27 00:59:26.863023 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.863035 | orchestrator | 2026-03-27 00:59:26.863046 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-27 00:59:26.863059 | orchestrator | Friday 27 March 2026 00:59:20 +0000 (0:00:00.124) 0:02:26.157 ********** 2026-03-27 00:59:26.863070 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.863081 | orchestrator | 2026-03-27 00:59:26.863092 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-27 00:59:26.863104 | orchestrator | Friday 27 March 2026 00:59:20 +0000 (0:00:00.093) 0:02:26.250 ********** 2026-03-27 00:59:26.863116 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.863128 | orchestrator | 2026-03-27 00:59:26.863140 | 
orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-27 00:59:26.863148 | orchestrator | Friday 27 March 2026 00:59:20 +0000 (0:00:00.130) 0:02:26.381 ********** 2026-03-27 00:59:26.863154 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.863161 | orchestrator | 2026-03-27 00:59:26.863168 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-27 00:59:26.863175 | orchestrator | Friday 27 March 2026 00:59:21 +0000 (0:00:00.298) 0:02:26.679 ********** 2026-03-27 00:59:26.863189 | orchestrator | ok: [testbed-node-0] 2026-03-27 00:59:26.863196 | orchestrator | 2026-03-27 00:59:26.863203 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-27 00:59:26.863210 | orchestrator | Friday 27 March 2026 00:59:24 +0000 (0:00:03.580) 0:02:30.260 ********** 2026-03-27 00:59:26.863217 | orchestrator | skipping: [testbed-node-0] 2026-03-27 00:59:26.863224 | orchestrator | skipping: [testbed-node-1] 2026-03-27 00:59:26.863231 | orchestrator | skipping: [testbed-node-2] 2026-03-27 00:59:26.863237 | orchestrator | 2026-03-27 00:59:26.863244 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 00:59:26.863252 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-27 00:59:26.863265 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-27 00:59:26.863273 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-27 00:59:26.863279 | orchestrator | 2026-03-27 00:59:26.863286 | orchestrator | 2026-03-27 00:59:26.863293 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 00:59:26.863299 | orchestrator | Friday 27 March 2026 00:59:25 
+0000 (0:00:00.529) 0:02:30.789 ********** 2026-03-27 00:59:26.863399 | orchestrator | =============================================================================== 2026-03-27 00:59:26.863426 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.43s 2026-03-27 00:59:26.863433 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.90s 2026-03-27 00:59:26.863439 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.48s 2026-03-27 00:59:26.863446 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.17s 2026-03-27 00:59:26.863456 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.32s 2026-03-27 00:59:26.863467 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.09s 2026-03-27 00:59:26.863478 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.77s 2026-03-27 00:59:26.863487 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 7.67s 2026-03-27 00:59:26.863497 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.88s 2026-03-27 00:59:26.863507 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.45s 2026-03-27 00:59:26.863518 | orchestrator | keystone : Creating default user role ----------------------------------- 3.58s 2026-03-27 00:59:26.863529 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.92s 2026-03-27 00:59:26.863539 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.83s 2026-03-27 00:59:26.863549 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.34s 2026-03-27 00:59:26.863559 | orchestrator | keystone : Ensuring config directories exist 
---------------------------- 2.22s 2026-03-27 00:59:26.863570 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.14s 2026-03-27 00:59:26.863581 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.07s 2026-03-27 00:59:26.863590 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.02s 2026-03-27 00:59:26.863601 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.79s 2026-03-27 00:59:26.863611 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.53s 2026-03-27 00:59:26.863622 | orchestrator | 2026-03-27 00:59:26 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 00:59:26.863632 | orchestrator | 2026-03-27 00:59:26 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output trimmed: from 00:59:29 to 01:01:28 the tasks fc31f5c5-f773-46a1-8fef-aa147c120287, d7fd2565-b9ab-44ca-91bd-a2136642f51c, d1a99260-a12b-4127-9c44-112f4e710bc4, 74708336-1126-4bd4-851f-920fad7d5c35 and 19fb0e65-5495-4e90-8a76-702bd6a59570 were polled roughly every 3 seconds and reported state STARTED; task 74708336-1126-4bd4-851f-920fad7d5c35 reached state SUCCESS at 01:00:21, task d7fd2565-b9ab-44ca-91bd-a2136642f51c reached state SUCCESS at 01:01:13, and task bb0a8bc8-6480-486e-b02b-9c133054bafd first appeared in state STARTED at 01:01:16 ...]
2026-03-27 01:01:31.509948 | orchestrator | 2026-03-27 01:01:31 | INFO  | Task
fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED
2026-03-27 01:01:31.511346 | orchestrator | 2026-03-27 01:01:31 | INFO  | Task d1a99260-a12b-4127-9c44-112f4e710bc4 is in state SUCCESS
2026-03-27 01:01:31.512332 | orchestrator |
2026-03-27 01:01:31.512369 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-27 01:01:31.512396 | orchestrator | 2.16.14
2026-03-27 01:01:31.512401 | orchestrator |
2026-03-27 01:01:31.512405 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************
2026-03-27 01:01:31.512411 | orchestrator |
2026-03-27 01:01:31.512418 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-27 01:01:31.512424 | orchestrator | Friday 27 March 2026 00:59:30 +0000 (0:00:00.193) 0:00:00.193 **********
2026-03-27 01:01:31.512435 | orchestrator | changed: [testbed-manager]
2026-03-27 01:01:31.512442 | orchestrator |
2026-03-27 01:01:31.512447 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-27 01:01:31.512454 | orchestrator | Friday 27 March 2026 00:59:31 +0000 (0:00:01.744) 0:00:01.938 **********
2026-03-27 01:01:31.512474 | orchestrator | changed: [testbed-manager]
2026-03-27 01:01:31.512481 | orchestrator |
2026-03-27 01:01:31.512487 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-27 01:01:31.512493 | orchestrator | Friday 27 March 2026 00:59:32 +0000 (0:00:00.935) 0:00:02.874 **********
2026-03-27 01:01:31.512500 | orchestrator | changed: [testbed-manager]
2026-03-27 01:01:31.512507 | orchestrator |
2026-03-27 01:01:31.512514 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-27 01:01:31.512518 | orchestrator | Friday 27 March 2026 00:59:33 +0000 (0:00:00.967) 0:00:03.842 **********
2026-03-27 01:01:31.512522 | orchestrator | changed:
[testbed-manager]
2026-03-27 01:01:31.512526 | orchestrator |
2026-03-27 01:01:31.512530 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-27 01:01:31.512533 | orchestrator | Friday 27 March 2026 00:59:34 +0000 (0:00:01.102) 0:00:04.944 **********
2026-03-27 01:01:31.512537 | orchestrator | changed: [testbed-manager]
2026-03-27 01:01:31.512541 | orchestrator |
2026-03-27 01:01:31.512545 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-27 01:01:31.512548 | orchestrator | Friday 27 March 2026 00:59:35 +0000 (0:00:00.934) 0:00:05.879 **********
2026-03-27 01:01:31.512552 | orchestrator | changed: [testbed-manager]
2026-03-27 01:01:31.512556 | orchestrator |
2026-03-27 01:01:31.512559 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-27 01:01:31.512563 | orchestrator | Friday 27 March 2026 00:59:36 +0000 (0:00:00.986) 0:00:06.866 **********
2026-03-27 01:01:31.512567 | orchestrator | changed: [testbed-manager]
2026-03-27 01:01:31.512571 | orchestrator |
2026-03-27 01:01:31.512574 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-27 01:01:31.512578 | orchestrator | Friday 27 March 2026 00:59:37 +0000 (0:00:01.061) 0:00:07.927 **********
2026-03-27 01:01:31.512582 | orchestrator | changed: [testbed-manager]
2026-03-27 01:01:31.512585 | orchestrator |
2026-03-27 01:01:31.512589 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-27 01:01:31.512593 | orchestrator | Friday 27 March 2026 00:59:38 +0000 (0:00:01.114) 0:00:09.041 **********
2026-03-27 01:01:31.512596 | orchestrator | changed: [testbed-manager]
2026-03-27 01:01:31.512600 | orchestrator |
2026-03-27 01:01:31.512604 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-27 01:01:31.512607 | orchestrator | Friday 27 March 2026 00:59:54 +0000 (0:00:15.564) 0:00:24.606 **********
2026-03-27 01:01:31.512611 | orchestrator | skipping: [testbed-manager]
2026-03-27 01:01:31.512615 | orchestrator |
2026-03-27 01:01:31.512618 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-27 01:01:31.512622 | orchestrator |
2026-03-27 01:01:31.512626 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-27 01:01:31.512630 | orchestrator | Friday 27 March 2026 00:59:54 +0000 (0:00:00.142) 0:00:24.749 **********
2026-03-27 01:01:31.512634 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:01:31.512637 | orchestrator |
2026-03-27 01:01:31.512641 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-27 01:01:31.512645 | orchestrator |
2026-03-27 01:01:31.512648 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-27 01:01:31.512652 | orchestrator | Friday 27 March 2026 01:00:06 +0000 (0:00:11.798) 0:00:36.548 **********
2026-03-27 01:01:31.512656 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:01:31.512660 | orchestrator |
2026-03-27 01:01:31.512663 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-27 01:01:31.512667 | orchestrator |
2026-03-27 01:01:31.512671 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-27 01:01:31.512674 | orchestrator | Friday 27 March 2026 01:00:07 +0000 (0:00:01.337) 0:00:37.885 **********
2026-03-27 01:01:31.512678 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:01:31.512682 | orchestrator |
2026-03-27 01:01:31.512686 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:01:31.512694 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-27 01:01:31.512699 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:01:31.512703 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:01:31.512707 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:01:31.512710 | orchestrator |
2026-03-27 01:01:31.512714 | orchestrator |
2026-03-27 01:01:31.512718 | orchestrator |
2026-03-27 01:01:31.512722 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:01:31.512754 | orchestrator | Friday 27 March 2026 01:00:19 +0000 (0:00:11.530) 0:00:49.416 **********
2026-03-27 01:01:31.512759 | orchestrator | ===============================================================================
2026-03-27 01:01:31.512769 | orchestrator | Restart ceph manager service ------------------------------------------- 24.67s
2026-03-27 01:01:31.512782 | orchestrator | Create admin user ------------------------------------------------------ 15.57s
2026-03-27 01:01:31.512786 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.74s
2026-03-27 01:01:31.512789 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.11s
2026-03-27 01:01:31.512793 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.10s
2026-03-27 01:01:31.512822 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.06s
2026-03-27 01:01:31.512827 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.99s
2026-03-27 01:01:31.512831 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.97s
2026-03-27 01:01:31.512835 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.94s
2026-03-27 01:01:31.512839 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.93s
2026-03-27 01:01:31.512842 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s
2026-03-27 01:01:31.512846 | orchestrator |
2026-03-27 01:01:31.512850 | orchestrator |
2026-03-27 01:01:31.512854 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-27 01:01:31.512857 | orchestrator |
2026-03-27 01:01:31.512861 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-27 01:01:31.512865 | orchestrator | Friday 27 March 2026 00:59:29 +0000 (0:00:00.095) 0:00:00.095 **********
2026-03-27 01:01:31.512869 | orchestrator | changed: [localhost]
2026-03-27 01:01:31.512873 | orchestrator |
2026-03-27 01:01:31.512876 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-27 01:01:31.512880 | orchestrator | Friday 27 March 2026 00:59:30 +0000 (0:00:00.997) 0:00:01.092 **********
2026-03-27 01:01:31.512884 | orchestrator | changed: [localhost]
2026-03-27 01:01:31.512888 | orchestrator |
2026-03-27 01:01:31.512910 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-27 01:01:31.512915 | orchestrator | Friday 27 March 2026 01:00:21 +0000 (0:00:51.069) 0:00:52.162 **********
2026-03-27 01:01:31.512919 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-03-27 01:01:31.512922 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left).
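The kernel download above fails twice ("3 retries left", "2 retries left") and then succeeds, which is Ansible's retries/until behavior on the download task. A minimal sketch of that retry-until-success pattern, with a hypothetical `fetch` callable standing in for the actual download (not the real Ansible module code):

```python
import time

def download_with_retries(fetch, retries=3, delay=0.0):
    """Call fetch() until it returns without raising, allowing up to
    `retries` additional attempts; re-raise once attempts are exhausted."""
    for attempt in range(retries + 1):
        try:
            return fetch()
        except Exception:
            if attempt == retries:
                raise  # attempts exhausted, surface the failure
            # Mirror the log format: failures counted down before each retry.
            print(f"FAILED - RETRYING: ({retries - attempt} retries left).")
            time.sleep(delay)

# Hypothetical flaky download: fails twice, then succeeds on attempt three,
# matching the sequence seen in the log above.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient download error")
    return "ironic-agent.kernel"

assert download_with_retries(flaky_fetch) == "ironic-agent.kernel"
```

The long task duration (0:00:51.069) includes the two failed attempts plus the successful third download.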
2026-03-27 01:01:31.512926 | orchestrator | changed: [localhost]
2026-03-27 01:01:31.512930 | orchestrator |
2026-03-27 01:01:31.512934 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 01:01:31.512938 | orchestrator |
2026-03-27 01:01:31.512941 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 01:01:31.512946 | orchestrator | Friday 27 March 2026 01:01:11 +0000 (0:00:50.602) 0:01:42.765 **********
2026-03-27 01:01:31.512954 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:01:31.512959 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:01:31.512963 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:01:31.512967 | orchestrator |
2026-03-27 01:01:31.512972 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 01:01:31.512976 | orchestrator | Friday 27 March 2026 01:01:12 +0000 (0:00:00.265) 0:01:43.030 **********
2026-03-27 01:01:31.512981 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-03-27 01:01:31.512986 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-03-27 01:01:31.512990 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-03-27 01:01:31.512995 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-03-27 01:01:31.512999 | orchestrator |
2026-03-27 01:01:31.513003 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-03-27 01:01:31.513008 | orchestrator | skipping: no hosts matched
2026-03-27 01:01:31.513013 | orchestrator |
2026-03-27 01:01:31.513017 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:01:31.513021 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:01:31.513026 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:01:31.513031 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:01:31.513035 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:01:31.513040 | orchestrator |
2026-03-27 01:01:31.513045 | orchestrator |
2026-03-27 01:01:31.513049 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:01:31.513053 | orchestrator | Friday 27 March 2026 01:01:12 +0000 (0:00:00.533) 0:01:43.564 **********
2026-03-27 01:01:31.513057 | orchestrator | ===============================================================================
2026-03-27 01:01:31.513062 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 51.07s
2026-03-27 01:01:31.513067 | orchestrator | Download ironic-agent kernel ------------------------------------------- 50.60s
2026-03-27 01:01:31.513071 | orchestrator | Ensure the destination directory exists --------------------------------- 1.00s
2026-03-27 01:01:31.513075 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2026-03-27 01:01:31.513080 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-03-27 01:01:31.513085 | orchestrator |
2026-03-27 01:01:31.513089 | orchestrator |
2026-03-27 01:01:31.513093 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 01:01:31.513098 | orchestrator |
2026-03-27 01:01:31.513102 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 01:01:31.513108 | orchestrator | Friday 27 March 2026 00:59:29 +0000 (0:00:00.451) 0:00:00.451 **********
2026-03-27 01:01:31.513113 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:01:31.513121 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:01:31.513126 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:01:31.513130 | orchestrator |
2026-03-27 01:01:31.513134 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 01:01:31.513138 | orchestrator | Friday 27 March 2026 00:59:30 +0000 (0:00:00.454) 0:00:00.906 **********
2026-03-27 01:01:31.513143 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-27 01:01:31.513148 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-27 01:01:31.513154 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-27 01:01:31.513160 | orchestrator |
2026-03-27 01:01:31.513164 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-27 01:01:31.513169 | orchestrator |
2026-03-27 01:01:31.513176 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-27 01:01:31.513181 | orchestrator | Friday 27 March 2026 00:59:30 +0000 (0:00:00.337) 0:00:01.243 **********
2026-03-27 01:01:31.513185 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 01:01:31.513190 | orchestrator |
2026-03-27 01:01:31.513194 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-27 01:01:31.513198 | orchestrator | Friday 27 March 2026 00:59:31 +0000 (0:00:00.637) 0:00:01.881 **********
2026-03-27 01:01:31.513203 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-27 01:01:31.513207 | orchestrator |
2026-03-27 01:01:31.513212 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-27 01:01:31.513216 | orchestrator | Friday 27 March 2026 00:59:35 +0000 (0:00:04.471) 0:00:06.353 **********
2026-03-27 01:01:31.513220 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-27 01:01:31.513225 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-27 01:01:31.513229 | orchestrator |
2026-03-27 01:01:31.513234 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-27 01:01:31.513238 | orchestrator | Friday 27 March 2026 00:59:43 +0000 (0:00:07.821) 0:00:14.175 **********
2026-03-27 01:01:31.513244 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-27 01:01:31.513250 | orchestrator |
2026-03-27 01:01:31.513255 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-27 01:01:31.513259 | orchestrator | Friday 27 March 2026 00:59:47 +0000 (0:00:03.662) 0:00:17.837 **********
2026-03-27 01:01:31.513264 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-27 01:01:31.513268 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-27 01:01:31.513272 | orchestrator |
2026-03-27 01:01:31.513279 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-27 01:01:31.513285 | orchestrator | Friday 27 March 2026 00:59:51 +0000 (0:00:04.757) 0:00:22.595 **********
2026-03-27 01:01:31.513292 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-27 01:01:31.513298 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-27 01:01:31.513304 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-27 01:01:31.513310 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-27 01:01:31.513316 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-03-27 01:01:31.513322 | orchestrator |
2026-03-27 01:01:31.513328 | orchestrator | TASK [service-ks-register : barbican | Granting user roles]
******************** 2026-03-27 01:01:31.513333 | orchestrator | Friday 27 March 2026 01:00:09 +0000 (0:00:17.540) 0:00:40.135 ********** 2026-03-27 01:01:31.513338 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-27 01:01:31.513344 | orchestrator | 2026-03-27 01:01:31.513350 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-27 01:01:31.513356 | orchestrator | Friday 27 March 2026 01:00:13 +0000 (0:00:04.004) 0:00:44.140 ********** 2026-03-27 01:01:31.513365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.513406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.513418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.513422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513452 | orchestrator | 2026-03-27 01:01:31.513456 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-27 01:01:31.513460 | orchestrator | Friday 27 March 2026 01:00:16 +0000 (0:00:03.412) 0:00:47.552 ********** 2026-03-27 01:01:31.513467 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-27 01:01:31.513471 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-27 01:01:31.513475 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-27 01:01:31.513479 | orchestrator | 2026-03-27 01:01:31.513482 | 
orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-27 01:01:31.513486 | orchestrator | Friday 27 March 2026 01:00:18 +0000 (0:00:01.708) 0:00:49.261 ********** 2026-03-27 01:01:31.513490 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:01:31.513494 | orchestrator | 2026-03-27 01:01:31.513501 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-27 01:01:31.513505 | orchestrator | Friday 27 March 2026 01:00:18 +0000 (0:00:00.121) 0:00:49.383 ********** 2026-03-27 01:01:31.513509 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:01:31.513513 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:01:31.513517 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:01:31.513520 | orchestrator | 2026-03-27 01:01:31.513524 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-27 01:01:31.513528 | orchestrator | Friday 27 March 2026 01:00:18 +0000 (0:00:00.283) 0:00:49.667 ********** 2026-03-27 01:01:31.513532 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:01:31.513536 | orchestrator | 2026-03-27 01:01:31.513539 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-27 01:01:31.513543 | orchestrator | Friday 27 March 2026 01:00:19 +0000 (0:00:00.736) 0:00:50.404 ********** 2026-03-27 01:01:31.513547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.513561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.513566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.513570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.513600 | orchestrator | 2026-03-27 01:01:31.513604 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-27 01:01:31.513608 | orchestrator | Friday 27 March 2026 01:00:23 +0000 (0:00:03.669) 0:00:54.074 ********** 2026-03-27 01:01:31.513612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-27 01:01:31.513616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513626 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:01:31.513630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-27 01:01:31.513646 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513655 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:01:31.513659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-27 01:01:31.513665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513673 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:01:31.513677 | orchestrator | 2026-03-27 01:01:31.513681 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-27 01:01:31.513685 | orchestrator | Friday 27 March 2026 01:00:24 +0000 (0:00:00.752) 0:00:54.826 ********** 2026-03-27 01:01:31.513855 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-27 01:01:31.513865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513873 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:01:31.513877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-27 01:01:31.513887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513895 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513904 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:01:31.513918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-27 01:01:31.513925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.513989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.514001 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:01:31.514006 | orchestrator | 2026-03-27 01:01:31.514010 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-27 01:01:31.514105 | orchestrator | Friday 27 March 2026 01:00:26 +0000 (0:00:01.983) 0:00:56.809 ********** 2026-03-27 01:01:31.514115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.514122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.514141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.514146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514181 | orchestrator | 2026-03-27 01:01:31.514185 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-27 01:01:31.514189 | orchestrator | Friday 27 March 2026 01:00:30 +0000 (0:00:04.062) 0:01:00.872 ********** 2026-03-27 01:01:31.514193 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:01:31.514197 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:01:31.514200 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:01:31.514204 | orchestrator | 2026-03-27 01:01:31.514208 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-27 01:01:31.514212 | orchestrator | Friday 27 March 2026 01:00:31 +0000 (0:00:01.472) 0:01:02.349 ********** 2026-03-27 01:01:31.514215 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-27 01:01:31.514219 | orchestrator | 2026-03-27 01:01:31.514226 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-27 01:01:31.514230 | orchestrator | Friday 27 March 2026 01:00:33 +0000 (0:00:01.804) 0:01:04.153 ********** 2026-03-27 01:01:31.514234 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:01:31.514237 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:01:31.514241 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:01:31.514245 | orchestrator | 2026-03-27 01:01:31.514248 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-27 01:01:31.514252 | orchestrator | Friday 27 March 2026 01:00:33 +0000 (0:00:00.510) 0:01:04.664 ********** 2026-03-27 01:01:31.514256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.514260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.514268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.514275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 
01:01:31.514298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514302 | orchestrator | 2026-03-27 01:01:31.514306 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-27 01:01:31.514310 | orchestrator | Friday 27 March 2026 01:00:44 +0000 (0:00:10.106) 0:01:14.770 ********** 2026-03-27 01:01:31.514318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-27 01:01:31.514329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.514337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.514346 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:01:31.514353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-27 01:01:31.514360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.514368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.514378 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:01:31.514389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-27 01:01:31.514395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.514402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:01:31.514407 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:01:31.514411 | orchestrator | 
2026-03-27 01:01:31.514415 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-27 01:01:31.514419 | orchestrator | Friday 27 March 2026 01:00:45 +0000 (0:00:01.259) 0:01:16.029 ********** 2026-03-27 01:01:31.514423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.514432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.514439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-27 01:01:31.514443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:01:31.514475 | orchestrator | 2026-03-27 01:01:31.514478 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-27 01:01:31.514482 | orchestrator | Friday 27 March 2026 01:00:49 +0000 (0:00:03.785) 0:01:19.815 ********** 2026-03-27 01:01:31.514486 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:01:31.514490 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:01:31.514494 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:01:31.514498 | orchestrator | 2026-03-27 01:01:31.514502 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-27 01:01:31.514506 | orchestrator | Friday 27 March 2026 01:00:49 +0000 (0:00:00.559) 0:01:20.375 ********** 2026-03-27 01:01:31.514510 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:01:31.514513 | orchestrator | 2026-03-27 01:01:31.514517 | 
orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-27 01:01:31.514521 | orchestrator | Friday 27 March 2026 01:00:52 +0000 (0:00:02.395) 0:01:22.771 ********** 2026-03-27 01:01:31.514525 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:01:31.514529 | orchestrator | 2026-03-27 01:01:31.514533 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-27 01:01:31.514536 | orchestrator | Friday 27 March 2026 01:00:54 +0000 (0:00:02.263) 0:01:25.034 ********** 2026-03-27 01:01:31.514540 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:01:31.514544 | orchestrator | 2026-03-27 01:01:31.514548 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-27 01:01:31.514552 | orchestrator | Friday 27 March 2026 01:01:05 +0000 (0:00:10.933) 0:01:35.968 ********** 2026-03-27 01:01:31.514556 | orchestrator | 2026-03-27 01:01:31.514559 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-27 01:01:31.514563 | orchestrator | Friday 27 March 2026 01:01:05 +0000 (0:00:00.181) 0:01:36.150 ********** 2026-03-27 01:01:31.514567 | orchestrator | 2026-03-27 01:01:31.514571 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-27 01:01:31.514574 | orchestrator | Friday 27 March 2026 01:01:05 +0000 (0:00:00.060) 0:01:36.210 ********** 2026-03-27 01:01:31.514578 | orchestrator | 2026-03-27 01:01:31.514583 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-27 01:01:31.514586 | orchestrator | Friday 27 March 2026 01:01:05 +0000 (0:00:00.061) 0:01:36.272 ********** 2026-03-27 01:01:31.514590 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:01:31.514594 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:01:31.514598 | orchestrator | changed: [testbed-node-1] 
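Each container item in the log above carries a kolla-style healthcheck dict (`interval`, `retries`, `start_period`, `timeout` as plain strings, plus a `test` list such as `['CMD-SHELL', 'healthcheck_port barbican-worker 5672']`). As an illustrative sketch only — this is not kolla-ansible's actual implementation, and the flag mapping is an assumption — such a dict could be rendered into `docker run` health flags like this:

```python
# Sketch: map a kolla-style healthcheck dict (shape taken from the log items
# above) onto docker CLI health flags. Hypothetical helper, not kolla's code.

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Render interval/retries/start_period/timeout and the test command."""
    args = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    # 'test' is e.g. ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'];
    # CMD-SHELL means the remainder is run through the container's shell.
    kind, *cmd = hc["test"]
    if kind == "CMD-SHELL":
        args += ["--health-cmd", " ".join(cmd)]
    return args

example = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(example))
# → ['--health-interval', '30s', '--health-retries', '3',
#    '--health-start-period', '5s', '--health-timeout', '30s',
#    '--health-cmd', 'healthcheck_port barbican-worker 5672']
```

The numeric fields arrive as strings in the log, so the sketch appends the `s` unit suffix itself; whether kolla treats them as seconds here is inferred from context.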
2026-03-27 01:01:31.514602 | orchestrator | 2026-03-27 01:01:31.514605 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-27 01:01:31.514609 | orchestrator | Friday 27 March 2026 01:01:16 +0000 (0:00:11.219) 0:01:47.491 ********** 2026-03-27 01:01:31.514613 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:01:31.514617 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:01:31.514623 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:01:31.514627 | orchestrator | 2026-03-27 01:01:31.514631 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-27 01:01:31.514635 | orchestrator | Friday 27 March 2026 01:01:23 +0000 (0:00:06.554) 0:01:54.046 ********** 2026-03-27 01:01:31.514638 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:01:31.514643 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:01:31.514647 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:01:31.514650 | orchestrator | 2026-03-27 01:01:31.514654 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:01:31.514658 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-27 01:01:31.514662 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-27 01:01:31.514666 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-27 01:01:31.514670 | orchestrator | 2026-03-27 01:01:31.514674 | orchestrator | 2026-03-27 01:01:31.514678 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:01:31.514681 | orchestrator | Friday 27 March 2026 01:01:29 +0000 (0:00:06.534) 0:02:00.580 ********** 2026-03-27 01:01:31.514686 | orchestrator | 
=============================================================================== 2026-03-27 01:01:31.514689 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.54s 2026-03-27 01:01:31.514693 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.22s 2026-03-27 01:01:31.514697 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.93s 2026-03-27 01:01:31.514701 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.11s 2026-03-27 01:01:31.514709 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.82s 2026-03-27 01:01:31.514713 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.56s 2026-03-27 01:01:31.514717 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.53s 2026-03-27 01:01:31.514721 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.76s 2026-03-27 01:01:31.514725 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.47s 2026-03-27 01:01:31.514740 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.06s 2026-03-27 01:01:31.514744 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.00s 2026-03-27 01:01:31.514748 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.79s 2026-03-27 01:01:31.514752 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.67s 2026-03-27 01:01:31.514755 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.66s 2026-03-27 01:01:31.514760 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.41s 2026-03-27 01:01:31.514763 | orchestrator | barbican : 
Creating barbican database ----------------------------------- 2.40s 2026-03-27 01:01:31.514767 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.26s 2026-03-27 01:01:31.514771 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.98s 2026-03-27 01:01:31.514775 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.80s 2026-03-27 01:01:31.514778 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.71s 2026-03-27 01:01:31.514782 | orchestrator | 2026-03-27 01:01:31 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:01:31.514786 | orchestrator | 2026-03-27 01:01:31 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:01:31.514794 | orchestrator | 2026-03-27 01:01:31 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:01:31.514798 | orchestrator | 2026-03-27 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:01:34.539511 | orchestrator | 2026-03-27 01:01:34 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:01:34.539757 | orchestrator | 2026-03-27 01:01:34 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:01:34.541434 | orchestrator | 2026-03-27 01:01:34 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:01:34.542096 | orchestrator | 2026-03-27 01:01:34 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:01:34.542116 | orchestrator | 2026-03-27 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:01:37.565558 | orchestrator | 2026-03-27 01:01:37 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:01:37.566053 | orchestrator | 2026-03-27 01:01:37 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in 
state STARTED 2026-03-27 01:01:37.566545 | orchestrator | 2026-03-27 01:01:37 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:01:37.567094 | orchestrator | 2026-03-27 01:01:37 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:01:37.567117 | orchestrator | 2026-03-27 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:01:40.600004 | orchestrator | 2026-03-27 01:01:40 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:01:40.600050 | orchestrator | 2026-03-27 01:01:40 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:01:40.601369 | orchestrator | 2026-03-27 01:01:40 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:01:40.601404 | orchestrator | 2026-03-27 01:01:40 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:01:40.601408 | orchestrator | 2026-03-27 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:01:43.627973 | orchestrator | 2026-03-27 01:01:43 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:01:43.628759 | orchestrator | 2026-03-27 01:01:43 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:01:43.629485 | orchestrator | 2026-03-27 01:01:43 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:01:43.630251 | orchestrator | 2026-03-27 01:01:43 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:01:43.630341 | orchestrator | 2026-03-27 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:01:46.658976 | orchestrator | 2026-03-27 01:01:46 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:01:46.660983 | orchestrator | 2026-03-27 01:01:46 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 
01:01:46.661702 | orchestrator | 2026-03-27 01:01:46 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:01:46.662470 | orchestrator | 2026-03-27 01:01:46 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:01:46.662565 | orchestrator | 2026-03-27 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:01:49.693584 | orchestrator | 2026-03-27 01:01:49 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:01:49.696547 | orchestrator | 2026-03-27 01:01:49 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:01:49.697471 | orchestrator | 2026-03-27 01:01:49 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:01:49.700549 | orchestrator | 2026-03-27 01:01:49 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:01:49.700608 | orchestrator | 2026-03-27 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:01:52.738483 | orchestrator | 2026-03-27 01:01:52 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:01:52.738887 | orchestrator | 2026-03-27 01:01:52 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:01:52.739645 | orchestrator | 2026-03-27 01:01:52 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:01:52.740325 | orchestrator | 2026-03-27 01:01:52 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:01:52.740350 | orchestrator | 2026-03-27 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:01:55.776533 | orchestrator | 2026-03-27 01:01:55 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:01:55.777324 | orchestrator | 2026-03-27 01:01:55 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:01:55.778389 | orchestrator 
| 2026-03-27 01:01:55 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:01:55.779091 | orchestrator | 2026-03-27 01:01:55 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:01:55.779117 | orchestrator | 2026-03-27 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:01:58.813051 | orchestrator | 2026-03-27 01:01:58 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:01:58.813418 | orchestrator | 2026-03-27 01:01:58 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:01:58.814173 | orchestrator | 2026-03-27 01:01:58 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:01:58.816430 | orchestrator | 2026-03-27 01:01:58 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:01:58.816488 | orchestrator | 2026-03-27 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:01.858776 | orchestrator | 2026-03-27 01:02:01 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:01.861227 | orchestrator | 2026-03-27 01:02:01 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:02:01.863646 | orchestrator | 2026-03-27 01:02:01 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:01.864983 | orchestrator | 2026-03-27 01:02:01 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:02:01.865015 | orchestrator | 2026-03-27 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:04.905979 | orchestrator | 2026-03-27 01:02:04 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:04.906454 | orchestrator | 2026-03-27 01:02:04 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:02:04.908140 | orchestrator | 2026-03-27 01:02:04 | INFO  | 
Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:04.909176 | orchestrator | 2026-03-27 01:02:04 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:02:04.909214 | orchestrator | 2026-03-27 01:02:04 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:07.950768 | orchestrator | 2026-03-27 01:02:07 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:07.950832 | orchestrator | 2026-03-27 01:02:07 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:02:07.951995 | orchestrator | 2026-03-27 01:02:07 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:07.952849 | orchestrator | 2026-03-27 01:02:07 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:02:07.952878 | orchestrator | 2026-03-27 01:02:07 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:10.993240 | orchestrator | 2026-03-27 01:02:10 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:10.994938 | orchestrator | 2026-03-27 01:02:10 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:02:10.997129 | orchestrator | 2026-03-27 01:02:10 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:10.999783 | orchestrator | 2026-03-27 01:02:10 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state STARTED 2026-03-27 01:02:10.999840 | orchestrator | 2026-03-27 01:02:10 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:14.050584 | orchestrator | 2026-03-27 01:02:14 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:14.052277 | orchestrator | 2026-03-27 01:02:14 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:02:14.055041 | orchestrator | 2026-03-27 01:02:14 | INFO  | Task 
6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED
2026-03-27 01:02:14.056017 | orchestrator | 2026-03-27 01:02:14 | INFO  | Task 5e6184bd-7ced-457e-91fa-94ac9819f7be is in state STARTED
2026-03-27 01:02:14.058682 | orchestrator | 2026-03-27 01:02:14 | INFO  | Task 19fb0e65-5495-4e90-8a76-702bd6a59570 is in state SUCCESS
2026-03-27 01:02:14.059840 | orchestrator |
2026-03-27 01:02:14.059877 | orchestrator |
2026-03-27 01:02:14.059883 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 01:02:14.059887 | orchestrator |
2026-03-27 01:02:14.059891 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 01:02:14.059896 | orchestrator | Friday 27 March 2026 00:59:29 +0000 (0:00:00.410) 0:00:00.410 **********
2026-03-27 01:02:14.059900 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:02:14.059904 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:02:14.059908 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:02:14.059912 | orchestrator |
2026-03-27 01:02:14.059918 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 01:02:14.059925 | orchestrator | Friday 27 March 2026 00:59:29 +0000 (0:00:00.332) 0:00:00.743 **********
2026-03-27 01:02:14.059934 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-27 01:02:14.059942 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-27 01:02:14.059948 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-27 01:02:14.059955 | orchestrator |
2026-03-27 01:02:14.059961 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-27 01:02:14.059967 | orchestrator |
2026-03-27 01:02:14.059973 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-27 01:02:14.059980 | orchestrator | Friday 27 March 2026 00:59:30 +0000 (0:00:00.513) 0:00:01.257 **********
2026-03-27 01:02:14.059986 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 01:02:14.059993 | orchestrator |
2026-03-27 01:02:14.059999 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-27 01:02:14.060006 | orchestrator | Friday 27 March 2026 00:59:31 +0000 (0:00:00.957) 0:00:02.215 **********
2026-03-27 01:02:14.060025 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-27 01:02:14.060030 | orchestrator |
2026-03-27 01:02:14.060034 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-27 01:02:14.060038 | orchestrator | Friday 27 March 2026 00:59:36 +0000 (0:00:04.784) 0:00:07.000 **********
2026-03-27 01:02:14.060042 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-27 01:02:14.060046 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-27 01:02:14.060049 | orchestrator |
2026-03-27 01:02:14.060053 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-27 01:02:14.060057 | orchestrator | Friday 27 March 2026 00:59:43 +0000 (0:00:07.746) 0:00:14.747 **********
2026-03-27 01:02:14.060061 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-27 01:02:14.060065 | orchestrator |
2026-03-27 01:02:14.060069 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-27 01:02:14.060072 | orchestrator | Friday 27 March 2026 00:59:47 +0000 (0:00:03.676) 0:00:18.423 **********
2026-03-27 01:02:14.060076 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-27 01:02:14.060080 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-27 01:02:14.060084 | orchestrator |
2026-03-27 01:02:14.060087 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-27 01:02:14.060091 | orchestrator | Friday 27 March 2026 00:59:52 +0000 (0:00:04.450) 0:00:22.874 **********
2026-03-27 01:02:14.060095 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-27 01:02:14.060099 | orchestrator |
2026-03-27 01:02:14.060103 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-03-27 01:02:14.060113 | orchestrator | Friday 27 March 2026 00:59:56 +0000 (0:00:04.032) 0:00:26.907 **********
2026-03-27 01:02:14.060117 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-03-27 01:02:14.060121 | orchestrator |
2026-03-27 01:02:14.060125 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-03-27 01:02:14.060166 | orchestrator | Friday 27 March 2026 01:00:00 +0000 (0:00:04.775) 0:00:31.682 **********
2026-03-27 01:02:14.060173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-27 01:02:14.060189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-27 01:02:14.060198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-27 01:02:14.060202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-27 01:02:14.060222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-27 01:02:14.060226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-27 01:02:14.060230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060324 | orchestrator |
2026-03-27 01:02:14.060328 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-03-27 01:02:14.060332 | orchestrator | Friday 27 March 2026 01:00:04 +0000 (0:00:03.557) 0:00:35.240 **********
2026-03-27 01:02:14.060336 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:02:14.060340 | orchestrator |
2026-03-27 01:02:14.060351 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-03-27 01:02:14.060357 | orchestrator | Friday 27 March 2026 01:00:04 +0000 (0:00:00.114) 0:00:35.354 **********
2026-03-27 01:02:14.060361 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:02:14.060365 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:02:14.060369 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:02:14.060373 | orchestrator |
2026-03-27 01:02:14.060376 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-27 01:02:14.060386 | orchestrator | Friday 27 March 2026 01:00:04 +0000 (0:00:00.273) 0:00:35.628 **********
2026-03-27 01:02:14.060393 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 01:02:14.060399 | orchestrator |
2026-03-27 01:02:14.060405 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-03-27 01:02:14.060411 | orchestrator | Friday 27 March 2026 01:00:05 +0000 (0:00:00.487) 0:00:36.115 **********
2026-03-27 01:02:14.060417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-27 01:02:14.060433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-27 01:02:14.060441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-27 01:02:14.060447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-27 01:02:14.060469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-27 01:02:14.060477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-27 01:02:14.060490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060591 | orchestrator |
2026-03-27 01:02:14.060599 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-03-27 01:02:14.060604 | orchestrator | Friday 27 March 2026 01:00:11 +0000 (0:00:06.536) 0:00:42.652 **********
2026-03-27 01:02:14.060611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-27 01:02:14.060638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-27 01:02:14.060650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060676 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:02:14.060686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-27 01:02:14.060697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-27 01:02:14.060721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-27 01:02:14.060750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-27 01:02:14.060760 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:02:14.060767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-27 01:02:14.060779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060798 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:02:14.060803 | orchestrator | 2026-03-27 01:02:14.060807 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-27 01:02:14.060810 | orchestrator | Friday 27 March 2026 01:00:13 +0000 (0:00:01.836) 0:00:44.489 ********** 2026-03-27 01:02:14.060819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.060824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
named 53'], 'timeout': '30'}}})  2026-03-27 01:02:14.060830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.060838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 01:02:14.060853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060872 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:02:14.060876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060886 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:02:14.060892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.060897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 01:02:14.060901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.060922 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:02:14.060925 | orchestrator | 2026-03-27 01:02:14.060929 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-27 01:02:14.060933 | orchestrator | Friday 27 March 2026 01:00:15 +0000 (0:00:01.829) 0:00:46.318 ********** 2026-03-27 01:02:14.060939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 01:02:14.060944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 01:02:14.060951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 01:02:14.060955 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.060967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.060974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-27 01:02:14.060991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.060996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061011 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061064 | orchestrator | 2026-03-27 01:02:14.061068 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-27 01:02:14.061072 | orchestrator | Friday 27 March 2026 01:00:23 +0000 (0:00:07.941) 0:00:54.260 ********** 2026-03-27 01:02:14.061076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 01:02:14.061082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 01:02:14.061087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:14.061094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061166 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061223 | orchestrator | 2026-03-27 01:02:14.061229 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-27 01:02:14.061235 | orchestrator | Friday 27 March 2026 01:00:44 +0000 (0:00:20.979) 0:01:15.239 ********** 2026-03-27 01:02:14.061242 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-27 01:02:14.061248 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-27 01:02:14.061254 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-27 01:02:14.061261 | orchestrator | 2026-03-27 01:02:14.061268 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-27 01:02:14.061274 | orchestrator | Friday 27 March 2026 01:00:49 +0000 (0:00:04.666) 0:01:19.906 ********** 2026-03-27 01:02:14.061281 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-27 01:02:14.061287 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-27 01:02:14.061293 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-27 01:02:14.061300 | orchestrator | 2026-03-27 01:02:14.061306 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-27 01:02:14.061312 | orchestrator | Friday 27 March 2026 01:00:52 +0000 (0:00:02.988) 0:01:22.894 ********** 2026-03-27 
01:02:14.061322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.061330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.061342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.061356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-27 01:02:14.061371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061385 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061462 | orchestrator | 2026-03-27 01:02:14.061468 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-27 01:02:14.061474 | orchestrator | Friday 27 March 2026 01:00:55 +0000 (0:00:03.167) 0:01:26.062 ********** 2026-03-27 01:02:14.061480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.061493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.061500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.061514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-27 01:02:14.061521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061542 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061638 | orchestrator | 2026-03-27 01:02:14.061645 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-27 01:02:14.061652 | orchestrator | Friday 27 March 2026 01:00:58 +0000 (0:00:02.830) 0:01:28.892 ********** 2026-03-27 01:02:14.061656 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:02:14.061660 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:02:14.061664 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:02:14.061668 | orchestrator | 2026-03-27 01:02:14.061671 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-27 01:02:14.061675 | orchestrator | Friday 27 March 2026 01:00:58 +0000 (0:00:00.277) 0:01:29.170 ********** 2026-03-27 01:02:14.061679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.061685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 01:02:14.061692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-27 01:02:14.061696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.061743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 01:02:14.061753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061765 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:02:14.061769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061784 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:02:14.061788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-27 01:02:14.061793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-27 01:02:14.061802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:02:14.061820 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:02:14.061824 | orchestrator | 2026-03-27 01:02:14.061829 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-27 01:02:14.061834 | orchestrator | Friday 27 March 2026 01:00:59 +0000 (0:00:00.935) 0:01:30.106 ********** 2026-03-27 01:02:14.061841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 01:02:14.061848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 01:02:14.061857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-27 01:02:14.061897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061940 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:02:14.061982 | orchestrator | 2026-03-27 01:02:14.061986 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-27 01:02:14.061990 | orchestrator | Friday 27 March 2026 01:01:04 +0000 (0:00:04.764) 0:01:34.870 ********** 2026-03-27 01:02:14.061994 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:02:14.061998 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:02:14.062001 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:02:14.062005 | orchestrator | 2026-03-27 01:02:14.062009 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-27 01:02:14.062051 | orchestrator | Friday 27 March 2026 01:01:05 +0000 (0:00:00.914) 0:01:35.784 ********** 2026-03-27 01:02:14.062057 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-27 01:02:14.062061 | orchestrator | 2026-03-27 01:02:14.062065 | orchestrator | TASK [designate : Creating Designate databases user and setting 
permissions] *** 2026-03-27 01:02:14.062069 | orchestrator | Friday 27 March 2026 01:01:07 +0000 (0:00:02.169) 0:01:37.953 ********** 2026-03-27 01:02:14.062072 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-27 01:02:14.062076 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-27 01:02:14.062083 | orchestrator | 2026-03-27 01:02:14.062088 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-27 01:02:14.062091 | orchestrator | Friday 27 March 2026 01:01:09 +0000 (0:00:02.514) 0:01:40.468 ********** 2026-03-27 01:02:14.062095 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:14.062099 | orchestrator | 2026-03-27 01:02:14.062103 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-27 01:02:14.062107 | orchestrator | Friday 27 March 2026 01:01:24 +0000 (0:00:15.184) 0:01:55.653 ********** 2026-03-27 01:02:14.062111 | orchestrator | 2026-03-27 01:02:14.062114 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-27 01:02:14.062118 | orchestrator | Friday 27 March 2026 01:01:25 +0000 (0:00:00.126) 0:01:55.780 ********** 2026-03-27 01:02:14.062122 | orchestrator | 2026-03-27 01:02:14.062126 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-27 01:02:14.062130 | orchestrator | Friday 27 March 2026 01:01:25 +0000 (0:00:00.122) 0:01:55.903 ********** 2026-03-27 01:02:14.062134 | orchestrator | 2026-03-27 01:02:14.062138 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-27 01:02:14.062141 | orchestrator | Friday 27 March 2026 01:01:25 +0000 (0:00:00.127) 0:01:56.030 ********** 2026-03-27 01:02:14.062145 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:14.062149 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:02:14.062153 | 
orchestrator | changed: [testbed-node-1] 2026-03-27 01:02:14.062157 | orchestrator | 2026-03-27 01:02:14.062160 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-27 01:02:14.062164 | orchestrator | Friday 27 March 2026 01:01:33 +0000 (0:00:08.172) 0:02:04.203 ********** 2026-03-27 01:02:14.062168 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:14.062172 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:02:14.062176 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:02:14.062179 | orchestrator | 2026-03-27 01:02:14.062183 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-27 01:02:14.062187 | orchestrator | Friday 27 March 2026 01:01:40 +0000 (0:00:07.094) 0:02:11.297 ********** 2026-03-27 01:02:14.062191 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:14.062197 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:02:14.062201 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:02:14.062204 | orchestrator | 2026-03-27 01:02:14.062208 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-27 01:02:14.062212 | orchestrator | Friday 27 March 2026 01:01:46 +0000 (0:00:06.060) 0:02:17.358 ********** 2026-03-27 01:02:14.062216 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:14.062220 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:02:14.062223 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:02:14.062227 | orchestrator | 2026-03-27 01:02:14.062233 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-27 01:02:14.062239 | orchestrator | Friday 27 March 2026 01:01:51 +0000 (0:00:04.795) 0:02:22.153 ********** 2026-03-27 01:02:14.062247 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:14.062256 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:02:14.062261 | orchestrator | 
changed: [testbed-node-2] 2026-03-27 01:02:14.062267 | orchestrator | 2026-03-27 01:02:14.062272 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-27 01:02:14.062278 | orchestrator | Friday 27 March 2026 01:01:56 +0000 (0:00:05.325) 0:02:27.478 ********** 2026-03-27 01:02:14.062284 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:02:14.062290 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:02:14.062296 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:14.062302 | orchestrator | 2026-03-27 01:02:14.062308 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-27 01:02:14.062314 | orchestrator | Friday 27 March 2026 01:02:04 +0000 (0:00:08.166) 0:02:35.644 ********** 2026-03-27 01:02:14.062320 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:14.062332 | orchestrator | 2026-03-27 01:02:14.062339 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:02:14.062345 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-27 01:02:14.062351 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-27 01:02:14.062365 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-27 01:02:14.062371 | orchestrator | 2026-03-27 01:02:14.062378 | orchestrator | 2026-03-27 01:02:14.062384 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:02:14.062391 | orchestrator | Friday 27 March 2026 01:02:12 +0000 (0:00:07.352) 0:02:42.997 ********** 2026-03-27 01:02:14.062398 | orchestrator | =============================================================================== 2026-03-27 01:02:14.062404 | orchestrator | designate : Copying over designate.conf 
-------------------------------- 20.98s 2026-03-27 01:02:14.062409 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.19s 2026-03-27 01:02:14.062413 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.17s 2026-03-27 01:02:14.062416 | orchestrator | designate : Restart designate-worker container -------------------------- 8.17s 2026-03-27 01:02:14.062420 | orchestrator | designate : Copying over config.json files for services ----------------- 7.94s 2026-03-27 01:02:14.062424 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.75s 2026-03-27 01:02:14.062428 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.35s 2026-03-27 01:02:14.062432 | orchestrator | designate : Restart designate-api container ----------------------------- 7.10s 2026-03-27 01:02:14.062436 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.54s 2026-03-27 01:02:14.062442 | orchestrator | designate : Restart designate-central container ------------------------- 6.06s 2026-03-27 01:02:14.062448 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.33s 2026-03-27 01:02:14.062455 | orchestrator | designate : Restart designate-producer container ------------------------ 4.80s 2026-03-27 01:02:14.062461 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.79s 2026-03-27 01:02:14.062468 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.78s 2026-03-27 01:02:14.062474 | orchestrator | designate : Check designate containers ---------------------------------- 4.76s 2026-03-27 01:02:14.062482 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.67s 2026-03-27 01:02:14.062486 | orchestrator | service-ks-register : designate | Creating users 
------------------------ 4.45s 2026-03-27 01:02:14.062490 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.03s 2026-03-27 01:02:14.062494 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.68s 2026-03-27 01:02:14.062500 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.56s 2026-03-27 01:02:17.096220 | orchestrator | 2026-03-27 01:02:17 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:17.096721 | orchestrator | 2026-03-27 01:02:17 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:02:17.098183 | orchestrator | 2026-03-27 01:02:17 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:17.098906 | orchestrator | 2026-03-27 01:02:17 | INFO  | Task 5e6184bd-7ced-457e-91fa-94ac9819f7be is in state STARTED 2026-03-27 01:02:17.099891 | orchestrator | 2026-03-27 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:20.136372 | orchestrator | 2026-03-27 01:02:20 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:20.141566 | orchestrator | 2026-03-27 01:02:20 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state STARTED 2026-03-27 01:02:20.146065 | orchestrator | 2026-03-27 01:02:20 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:20.146352 | orchestrator | 2026-03-27 01:02:20 | INFO  | Task 5e6184bd-7ced-457e-91fa-94ac9819f7be is in state SUCCESS 2026-03-27 01:02:20.149520 | orchestrator | 2026-03-27 01:02:20 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:20.149568 | orchestrator | 2026-03-27 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:23.191639 | orchestrator | 2026-03-27 01:02:23 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 
01:02:23.192861 | orchestrator | 2026-03-27 01:02:23 | INFO  | Task bb0a8bc8-6480-486e-b02b-9c133054bafd is in state SUCCESS 2026-03-27 01:02:23.194777 | orchestrator | 2026-03-27 01:02:23.194819 | orchestrator | 2026-03-27 01:02:23.194831 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 01:02:23.194842 | orchestrator | 2026-03-27 01:02:23.194873 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 01:02:23.194879 | orchestrator | Friday 27 March 2026 01:02:15 +0000 (0:00:00.176) 0:00:00.176 ********** 2026-03-27 01:02:23.194884 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:02:23.194890 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:02:23.194895 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:02:23.194900 | orchestrator | 2026-03-27 01:02:23.194906 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 01:02:23.194911 | orchestrator | Friday 27 March 2026 01:02:15 +0000 (0:00:00.326) 0:00:00.503 ********** 2026-03-27 01:02:23.194922 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-27 01:02:23.194927 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-27 01:02:23.194932 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-27 01:02:23.194938 | orchestrator | 2026-03-27 01:02:23.194943 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-27 01:02:23.194949 | orchestrator | 2026-03-27 01:02:23.194954 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-27 01:02:23.194959 | orchestrator | Friday 27 March 2026 01:02:15 +0000 (0:00:00.417) 0:00:00.920 ********** 2026-03-27 01:02:23.194964 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:02:23.194970 | orchestrator | ok: [testbed-node-1] 2026-03-27 
01:02:23.194976 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:02:23.194989 | orchestrator | 2026-03-27 01:02:23.194999 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:02:23.195005 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 01:02:23.195011 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 01:02:23.195016 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-27 01:02:23.195022 | orchestrator | 2026-03-27 01:02:23.195027 | orchestrator | 2026-03-27 01:02:23.195032 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:02:23.195037 | orchestrator | Friday 27 March 2026 01:02:16 +0000 (0:00:01.055) 0:00:01.975 ********** 2026-03-27 01:02:23.195043 | orchestrator | =============================================================================== 2026-03-27 01:02:23.195048 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.06s 2026-03-27 01:02:23.195054 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-03-27 01:02:23.195060 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-03-27 01:02:23.195078 | orchestrator | 2026-03-27 01:02:23.195083 | orchestrator | 2026-03-27 01:02:23.195088 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 01:02:23.195093 | orchestrator | 2026-03-27 01:02:23.195098 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 01:02:23.195104 | orchestrator | Friday 27 March 2026 01:01:17 +0000 (0:00:00.894) 0:00:00.894 ********** 2026-03-27 01:02:23.195109 | orchestrator | ok: 
[testbed-node-0] 2026-03-27 01:02:23.195115 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:02:23.195120 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:02:23.195125 | orchestrator | 2026-03-27 01:02:23.195130 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 01:02:23.195135 | orchestrator | Friday 27 March 2026 01:01:18 +0000 (0:00:00.528) 0:00:01.422 ********** 2026-03-27 01:02:23.195140 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-27 01:02:23.195146 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-27 01:02:23.195150 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-27 01:02:23.195155 | orchestrator | 2026-03-27 01:02:23.195159 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-27 01:02:23.195164 | orchestrator | 2026-03-27 01:02:23.195169 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-27 01:02:23.195173 | orchestrator | Friday 27 March 2026 01:01:18 +0000 (0:00:00.380) 0:00:01.803 ********** 2026-03-27 01:02:23.195178 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:02:23.195183 | orchestrator | 2026-03-27 01:02:23.195188 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-27 01:02:23.195200 | orchestrator | Friday 27 March 2026 01:01:19 +0000 (0:00:00.800) 0:00:02.603 ********** 2026-03-27 01:02:23.195207 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-27 01:02:23.195212 | orchestrator | 2026-03-27 01:02:23.195217 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-27 01:02:23.195222 | orchestrator | Friday 27 March 2026 01:01:24 +0000 (0:00:05.135) 0:00:07.738 ********** 
2026-03-27 01:02:23.195227 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-27 01:02:23.195232 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-27 01:02:23.195237 | orchestrator | 2026-03-27 01:02:23.195243 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-27 01:02:23.195248 | orchestrator | Friday 27 March 2026 01:01:31 +0000 (0:00:06.516) 0:00:14.255 ********** 2026-03-27 01:02:23.195253 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-27 01:02:23.195258 | orchestrator | 2026-03-27 01:02:23.195263 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-27 01:02:23.195268 | orchestrator | Friday 27 March 2026 01:01:34 +0000 (0:00:03.190) 0:00:17.446 ********** 2026-03-27 01:02:23.195282 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-27 01:02:23.195288 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-27 01:02:23.195293 | orchestrator | 2026-03-27 01:02:23.195298 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-27 01:02:23.195303 | orchestrator | Friday 27 March 2026 01:01:38 +0000 (0:00:03.597) 0:00:21.043 ********** 2026-03-27 01:02:23.195308 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-27 01:02:23.195313 | orchestrator | 2026-03-27 01:02:23.195318 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-27 01:02:23.195323 | orchestrator | Friday 27 March 2026 01:01:41 +0000 (0:00:03.390) 0:00:24.434 ********** 2026-03-27 01:02:23.195328 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-27 01:02:23.195333 | orchestrator | 2026-03-27 01:02:23.195344 | orchestrator | TASK 
[placement : include_tasks] *********************************************** 2026-03-27 01:02:23.195349 | orchestrator | Friday 27 March 2026 01:01:45 +0000 (0:00:03.934) 0:00:28.369 ********** 2026-03-27 01:02:23.195354 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:02:23.195359 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:02:23.195365 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:02:23.195369 | orchestrator | 2026-03-27 01:02:23.195374 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-27 01:02:23.195379 | orchestrator | Friday 27 March 2026 01:01:45 +0000 (0:00:00.500) 0:00:28.869 ********** 2026-03-27 01:02:23.195386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195403 | orchestrator | 2026-03-27 01:02:23.195407 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-27 01:02:23.195410 | orchestrator | Friday 27 March 2026 01:01:47 +0000 (0:00:01.491) 0:00:30.360 ********** 2026-03-27 01:02:23.195413 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:02:23.195416 | orchestrator | 2026-03-27 01:02:23.195419 | orchestrator | TASK [placement : Set placement policy file] 
*********************************** 2026-03-27 01:02:23.195422 | orchestrator | Friday 27 March 2026 01:01:47 +0000 (0:00:00.228) 0:00:30.588 ********** 2026-03-27 01:02:23.195425 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:02:23.195433 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:02:23.195437 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:02:23.195440 | orchestrator | 2026-03-27 01:02:23.195443 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-27 01:02:23.195446 | orchestrator | Friday 27 March 2026 01:01:48 +0000 (0:00:00.470) 0:00:31.059 ********** 2026-03-27 01:02:23.195449 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:02:23.195452 | orchestrator | 2026-03-27 01:02:23.195455 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-27 01:02:23.195458 | orchestrator | Friday 27 March 2026 01:01:48 +0000 (0:00:00.616) 0:00:31.676 ********** 2026-03-27 01:02:23.195461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195477 | orchestrator | 2026-03-27 01:02:23.195482 | orchestrator | TASK 
[service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-27 01:02:23.195487 | orchestrator | Friday 27 March 2026 01:01:49 +0000 (0:00:01.293) 0:00:32.970 ********** 2026-03-27 01:02:23.195496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-27 01:02:23.195505 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:02:23.195510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-27 01:02:23.195516 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:02:23.195521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-27 01:02:23.195527 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:02:23.195532 | orchestrator | 2026-03-27 01:02:23.195537 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-27 01:02:23.195543 | orchestrator | Friday 27 March 2026 01:01:50 +0000 (0:00:00.480) 0:00:33.451 ********** 2026-03-27 01:02:23.195548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-27 01:02:23.195554 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:02:23.195560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-27 01:02:23.195569 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:02:23.195574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-27 01:02:23.195579 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:02:23.195584 | orchestrator | 2026-03-27 01:02:23.195590 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-27 01:02:23.195595 | orchestrator | Friday 27 March 2026 01:01:51 +0000 (0:00:00.602) 0:00:34.053 ********** 2026-03-27 01:02:23.195600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195621 | orchestrator | 2026-03-27 01:02:23.195624 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-27 01:02:23.195627 | orchestrator | Friday 27 March 2026 01:01:52 +0000 (0:00:01.257) 0:00:35.311 ********** 2026-03-27 01:02:23.195636 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195656 | orchestrator | 2026-03-27 01:02:23.195661 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-27 01:02:23.195666 | orchestrator | Friday 27 March 2026 01:01:54 +0000 (0:00:02.603) 0:00:37.915 ********** 2026-03-27 01:02:23.195671 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-27 01:02:23.195684 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-27 01:02:23.195688 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-27 01:02:23.195709 | orchestrator | 2026-03-27 01:02:23.195715 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-27 01:02:23.195723 | orchestrator | Friday 27 March 2026 01:01:56 +0000 (0:00:01.360) 0:00:39.276 ********** 2026-03-27 01:02:23.195744 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:23.195749 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:02:23.195754 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:02:23.195759 | orchestrator | 
2026-03-27 01:02:23.195764 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-27 01:02:23.195769 | orchestrator | Friday 27 March 2026 01:01:57 +0000 (0:00:01.204) 0:00:40.480 ********** 2026-03-27 01:02:23.195779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-27 01:02:23.195785 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:02:23.195790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-27 01:02:23.195807 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:02:23.195816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-27 01:02:23.195822 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:02:23.195832 | orchestrator | 2026-03-27 01:02:23.195837 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-27 01:02:23.195842 | orchestrator | Friday 27 March 2026 01:01:58 +0000 (0:00:00.927) 0:00:41.407 ********** 2026-03-27 01:02:23.195850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-27 01:02:23.195877 | orchestrator | 2026-03-27 01:02:23.195886 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-27 01:02:23.195892 | orchestrator | Friday 27 March 2026 01:01:59 +0000 (0:00:01.108) 0:00:42.516 ********** 2026-03-27 01:02:23.195895 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:23.195898 | orchestrator | 2026-03-27 01:02:23.195901 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-27 01:02:23.195904 | orchestrator | Friday 27 March 2026 01:02:01 +0000 (0:00:02.036) 0:00:44.553 ********** 2026-03-27 01:02:23.195907 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:23.195910 | orchestrator | 2026-03-27 01:02:23.195913 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-27 01:02:23.195916 | orchestrator | Friday 27 March 2026 01:02:03 +0000 (0:00:02.289) 0:00:46.843 ********** 2026-03-27 01:02:23.195920 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:23.195923 | orchestrator | 2026-03-27 01:02:23.195926 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-27 01:02:23.195932 | orchestrator | Friday 27 March 2026 01:02:16 +0000 (0:00:12.180) 0:00:59.024 ********** 2026-03-27 01:02:23.195935 | orchestrator | 2026-03-27 01:02:23.195940 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-27 01:02:23.195945 | orchestrator | Friday 27 March 2026 01:02:16 +0000 (0:00:00.141) 
0:00:59.165 ********** 2026-03-27 01:02:23.195950 | orchestrator | 2026-03-27 01:02:23.195954 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-27 01:02:23.195959 | orchestrator | Friday 27 March 2026 01:02:16 +0000 (0:00:00.067) 0:00:59.233 ********** 2026-03-27 01:02:23.195964 | orchestrator | 2026-03-27 01:02:23.195967 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-27 01:02:23.195970 | orchestrator | Friday 27 March 2026 01:02:16 +0000 (0:00:00.064) 0:00:59.297 ********** 2026-03-27 01:02:23.195973 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:02:23.195976 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:02:23.195979 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:02:23.195982 | orchestrator | 2026-03-27 01:02:23.195985 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:02:23.195988 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-27 01:02:23.195992 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-27 01:02:23.195995 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-27 01:02:23.195998 | orchestrator | 2026-03-27 01:02:23.196001 | orchestrator | 2026-03-27 01:02:23.196006 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:02:23.196011 | orchestrator | Friday 27 March 2026 01:02:21 +0000 (0:00:04.842) 0:01:04.140 ********** 2026-03-27 01:02:23.196023 | orchestrator | =============================================================================== 2026-03-27 01:02:23.196029 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.18s 2026-03-27 01:02:23.196033 | orchestrator | 
service-ks-register : placement | Creating endpoints -------------------- 6.52s 2026-03-27 01:02:23.196037 | orchestrator | service-ks-register : placement | Creating services --------------------- 5.14s 2026-03-27 01:02:23.196042 | orchestrator | placement : Restart placement-api container ----------------------------- 4.84s 2026-03-27 01:02:23.196047 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.93s 2026-03-27 01:02:23.196051 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.60s 2026-03-27 01:02:23.196056 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.39s 2026-03-27 01:02:23.196061 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.19s 2026-03-27 01:02:23.196065 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.60s 2026-03-27 01:02:23.196070 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.29s 2026-03-27 01:02:23.196074 | orchestrator | placement : Creating placement databases -------------------------------- 2.04s 2026-03-27 01:02:23.196082 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.49s 2026-03-27 01:02:23.196086 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.36s 2026-03-27 01:02:23.196091 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.29s 2026-03-27 01:02:23.196095 | orchestrator | placement : Copying over config.json files for services ----------------- 1.26s 2026-03-27 01:02:23.196100 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.20s 2026-03-27 01:02:23.196105 | orchestrator | placement : Check placement containers ---------------------------------- 1.11s 2026-03-27 01:02:23.196110 | orchestrator | placement : 
Copying over existing policy file --------------------------- 0.93s 2026-03-27 01:02:23.196119 | orchestrator | placement : include_tasks ----------------------------------------------- 0.80s 2026-03-27 01:02:23.196122 | orchestrator | placement : include_tasks ----------------------------------------------- 0.62s 2026-03-27 01:02:23.196125 | orchestrator | 2026-03-27 01:02:23 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:23.196193 | orchestrator | 2026-03-27 01:02:23 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:23.196199 | orchestrator | 2026-03-27 01:02:23 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:23.203784 | orchestrator | 2026-03-27 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:26.238477 | orchestrator | 2026-03-27 01:02:26 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:26.238532 | orchestrator | 2026-03-27 01:02:26 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:26.238540 | orchestrator | 2026-03-27 01:02:26 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:26.239441 | orchestrator | 2026-03-27 01:02:26 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:26.239474 | orchestrator | 2026-03-27 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:29.272263 | orchestrator | 2026-03-27 01:02:29 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:29.272319 | orchestrator | 2026-03-27 01:02:29 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:29.273041 | orchestrator | 2026-03-27 01:02:29 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:29.275472 | orchestrator | 2026-03-27 01:02:29 | INFO  | Task 
18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:29.275526 | orchestrator | 2026-03-27 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:32.404077 | orchestrator | 2026-03-27 01:02:32 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:32.405395 | orchestrator | 2026-03-27 01:02:32 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:32.406042 | orchestrator | 2026-03-27 01:02:32 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:32.406897 | orchestrator | 2026-03-27 01:02:32 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:32.406948 | orchestrator | 2026-03-27 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:35.498914 | orchestrator | 2026-03-27 01:02:35 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:35.499542 | orchestrator | 2026-03-27 01:02:35 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:35.500085 | orchestrator | 2026-03-27 01:02:35 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:35.500774 | orchestrator | 2026-03-27 01:02:35 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:35.500801 | orchestrator | 2026-03-27 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:38.539135 | orchestrator | 2026-03-27 01:02:38 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:38.539644 | orchestrator | 2026-03-27 01:02:38 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:38.540283 | orchestrator | 2026-03-27 01:02:38 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:38.541840 | orchestrator | 2026-03-27 01:02:38 | INFO  | Task 
18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:38.542658 | orchestrator | 2026-03-27 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:41.569914 | orchestrator | 2026-03-27 01:02:41 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:41.569973 | orchestrator | 2026-03-27 01:02:41 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:41.570888 | orchestrator | 2026-03-27 01:02:41 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:41.571290 | orchestrator | 2026-03-27 01:02:41 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:41.571352 | orchestrator | 2026-03-27 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:44.607963 | orchestrator | 2026-03-27 01:02:44 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:44.609992 | orchestrator | 2026-03-27 01:02:44 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:44.612071 | orchestrator | 2026-03-27 01:02:44 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:44.614216 | orchestrator | 2026-03-27 01:02:44 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:44.614260 | orchestrator | 2026-03-27 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:47.860803 | orchestrator | 2026-03-27 01:02:47 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:47.860857 | orchestrator | 2026-03-27 01:02:47 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:47.860865 | orchestrator | 2026-03-27 01:02:47 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:47.860870 | orchestrator | 2026-03-27 01:02:47 | INFO  | Task 
18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:47.860877 | orchestrator | 2026-03-27 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:50.666964 | orchestrator | 2026-03-27 01:02:50 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:50.667169 | orchestrator | 2026-03-27 01:02:50 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:50.668094 | orchestrator | 2026-03-27 01:02:50 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:50.669760 | orchestrator | 2026-03-27 01:02:50 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:50.669794 | orchestrator | 2026-03-27 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:53.696385 | orchestrator | 2026-03-27 01:02:53 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:53.696949 | orchestrator | 2026-03-27 01:02:53 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:53.697380 | orchestrator | 2026-03-27 01:02:53 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:53.698043 | orchestrator | 2026-03-27 01:02:53 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:53.698118 | orchestrator | 2026-03-27 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:56.741932 | orchestrator | 2026-03-27 01:02:56 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:56.743030 | orchestrator | 2026-03-27 01:02:56 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:56.743605 | orchestrator | 2026-03-27 01:02:56 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state STARTED 2026-03-27 01:02:56.744266 | orchestrator | 2026-03-27 01:02:56 | INFO  | Task 
18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:56.744312 | orchestrator | 2026-03-27 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:02:59.793431 | orchestrator | 2026-03-27 01:02:59 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:02:59.793959 | orchestrator | 2026-03-27 01:02:59 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:02:59.794398 | orchestrator | 2026-03-27 01:02:59 | INFO  | Task 3f30abc6-4606-4467-abf2-32f037dd4fd6 is in state SUCCESS 2026-03-27 01:02:59.795231 | orchestrator | 2026-03-27 01:02:59 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:02:59.795254 | orchestrator | 2026-03-27 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:02.828860 | orchestrator | 2026-03-27 01:03:02 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:02.829732 | orchestrator | 2026-03-27 01:03:02 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:02.830808 | orchestrator | 2026-03-27 01:03:02 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:03:02.831654 | orchestrator | 2026-03-27 01:03:02 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:02.831771 | orchestrator | 2026-03-27 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:05.857885 | orchestrator | 2026-03-27 01:03:05 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:05.857932 | orchestrator | 2026-03-27 01:03:05 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:05.859373 | orchestrator | 2026-03-27 01:03:05 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:03:05.859421 | orchestrator | 2026-03-27 01:03:05 | INFO  | Task 
18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:05.859430 | orchestrator | 2026-03-27 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:08.912370 | orchestrator | 2026-03-27 01:03:08 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:08.913917 | orchestrator | 2026-03-27 01:03:08 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:08.914935 | orchestrator | 2026-03-27 01:03:08 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:03:08.916583 | orchestrator | 2026-03-27 01:03:08 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:08.916628 | orchestrator | 2026-03-27 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:12.003265 | orchestrator | 2026-03-27 01:03:12 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:12.003876 | orchestrator | 2026-03-27 01:03:12 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:12.006185 | orchestrator | 2026-03-27 01:03:12 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:03:12.008834 | orchestrator | 2026-03-27 01:03:12 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:12.008889 | orchestrator | 2026-03-27 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:15.053844 | orchestrator | 2026-03-27 01:03:15 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:15.054435 | orchestrator | 2026-03-27 01:03:15 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:15.057559 | orchestrator | 2026-03-27 01:03:15 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:03:15.058642 | orchestrator | 2026-03-27 01:03:15 | INFO  | Task 
18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:15.058794 | orchestrator | 2026-03-27 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:18.091986 | orchestrator | 2026-03-27 01:03:18 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:18.093081 | orchestrator | 2026-03-27 01:03:18 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:18.095781 | orchestrator | 2026-03-27 01:03:18 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:03:18.099243 | orchestrator | 2026-03-27 01:03:18 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:18.099294 | orchestrator | 2026-03-27 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:21.144676 | orchestrator | 2026-03-27 01:03:21 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:21.148933 | orchestrator | 2026-03-27 01:03:21 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:21.152630 | orchestrator | 2026-03-27 01:03:21 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:03:21.155743 | orchestrator | 2026-03-27 01:03:21 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:21.155880 | orchestrator | 2026-03-27 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:24.204235 | orchestrator | 2026-03-27 01:03:24 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:24.205980 | orchestrator | 2026-03-27 01:03:24 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:24.207571 | orchestrator | 2026-03-27 01:03:24 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state STARTED 2026-03-27 01:03:24.209032 | orchestrator | 2026-03-27 01:03:24 | INFO  | Task 
18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:24.209153 | orchestrator | 2026-03-27 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:27.246227 | orchestrator | 2026-03-27 01:03:27 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:27.246926 | orchestrator | 2026-03-27 01:03:27 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:27.247858 | orchestrator | 2026-03-27 01:03:27 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:03:27.249317 | orchestrator | 2026-03-27 01:03:27 | INFO  | Task 6f660ac2-2339-4f7d-b428-d322de8f0c2c is in state SUCCESS 2026-03-27 01:03:27.250825 | orchestrator | 2026-03-27 01:03:27.250865 | orchestrator | 2026-03-27 01:03:27.250871 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 01:03:27.250877 | orchestrator | 2026-03-27 01:03:27.250883 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 01:03:27.250889 | orchestrator | Friday 27 March 2026 01:02:26 +0000 (0:00:00.467) 0:00:00.467 ********** 2026-03-27 01:03:27.250895 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:03:27.250902 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:03:27.250918 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:03:27.250921 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:03:27.250925 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:03:27.250930 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:03:27.250935 | orchestrator | ok: [testbed-manager] 2026-03-27 01:03:27.250940 | orchestrator | 2026-03-27 01:03:27.250945 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 01:03:27.250951 | orchestrator | Friday 27 March 2026 01:02:26 +0000 (0:00:00.949) 0:00:01.417 ********** 2026-03-27 01:03:27.250956 | orchestrator | 
ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-27 01:03:27.250962 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-27 01:03:27.250967 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-27 01:03:27.250974 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-27 01:03:27.250977 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-27 01:03:27.250980 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-27 01:03:27.250983 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-27 01:03:27.250986 | orchestrator |
2026-03-27 01:03:27.250989 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-27 01:03:27.250992 | orchestrator |
2026-03-27 01:03:27.250995 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-27 01:03:27.250998 | orchestrator | Friday 27 March 2026 01:02:27 +0000 (0:00:00.771) 0:00:02.189 **********
2026-03-27 01:03:27.251002 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-27 01:03:27.251008 | orchestrator |
2026-03-27 01:03:27.251013 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-27 01:03:27.251046 | orchestrator | Friday 27 March 2026 01:02:29 +0000 (0:00:01.373) 0:00:03.562 **********
2026-03-27 01:03:27.251051 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2026-03-27 01:03:27.251056 | orchestrator |
2026-03-27 01:03:27.251060 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-27 01:03:27.251065 | orchestrator | Friday 27 March 2026 01:02:32 +0000 (0:00:03.852) 0:00:07.415 **********
2026-03-27 01:03:27.251071 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-27 01:03:27.251077 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-27 01:03:27.251082 | orchestrator |
2026-03-27 01:03:27.251087 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-27 01:03:27.251121 | orchestrator | Friday 27 March 2026 01:02:38 +0000 (0:00:05.625) 0:00:13.040 **********
2026-03-27 01:03:27.251134 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-27 01:03:27.251140 | orchestrator |
2026-03-27 01:03:27.251146 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-27 01:03:27.251152 | orchestrator | Friday 27 March 2026 01:02:41 +0000 (0:00:03.157) 0:00:16.198 **********
2026-03-27 01:03:27.251158 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2026-03-27 01:03:27.251164 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-27 01:03:27.251169 | orchestrator |
2026-03-27 01:03:27.251175 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-27 01:03:27.251181 | orchestrator | Friday 27 March 2026 01:02:44 +0000 (0:00:03.187) 0:00:19.385 **********
2026-03-27 01:03:27.251187 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-27 01:03:27.251193 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2026-03-27 01:03:27.251198 | orchestrator |
2026-03-27 01:03:27.251203 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-27 01:03:27.251210 | orchestrator | Friday 27 March 2026 01:02:51 +0000 (0:00:06.186) 0:00:25.572 **********
2026-03-27 01:03:27.251222 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2026-03-27 01:03:27.251227 | orchestrator |
2026-03-27 01:03:27.251232 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:03:27.251237 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:03:27.251242 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:03:27.251247 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:03:27.251259 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:03:27.251264 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:03:27.251278 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:03:27.251281 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:03:27.251285 | orchestrator |
2026-03-27 01:03:27.251288 | orchestrator |
2026-03-27 01:03:27.251291 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:03:27.251294 | orchestrator | Friday 27 March 2026 01:02:58 +0000 (0:00:07.060) 0:00:32.632 **********
2026-03-27 01:03:27.251298 | orchestrator | ===============================================================================
2026-03-27 01:03:27.251303 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 7.06s
2026-03-27 01:03:27.251308 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.19s
2026-03-27 01:03:27.251313 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.63s
2026-03-27 01:03:27.251318 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.85s
2026-03-27 01:03:27.251323 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.19s
2026-03-27 01:03:27.251328 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.16s
2026-03-27 01:03:27.251333 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.37s
2026-03-27 01:03:27.251338 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.95s
2026-03-27 01:03:27.251343 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s
2026-03-27 01:03:27.251347 | orchestrator |
2026-03-27 01:03:27.251352 | orchestrator |
2026-03-27 01:03:27.251357 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 01:03:27.251362 | orchestrator |
2026-03-27 01:03:27.251367 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 01:03:27.251373 | orchestrator | Friday 27 March 2026 01:01:34 +0000 (0:00:00.803) 0:00:00.803 **********
2026-03-27 01:03:27.251379 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:03:27.251384 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:03:27.251389 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:03:27.251395 | orchestrator |
2026-03-27 01:03:27.251400 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 01:03:27.251405 | orchestrator | Friday 27 March 2026 01:01:35 +0000 (0:00:00.454) 0:00:01.258 **********
2026-03-27 01:03:27.251411 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-27 01:03:27.251416 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-27 01:03:27.251422 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-27 01:03:27.251428 |
2026-03-27 01:03:27.251434 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-27 01:03:27.251444 | orchestrator |
2026-03-27 01:03:27.251450 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-27 01:03:27.251456 | orchestrator | Friday 27 March 2026 01:01:35 +0000 (0:00:00.424) 0:00:01.682 **********
2026-03-27 01:03:27.251461 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 01:03:27.251466 | orchestrator |
2026-03-27 01:03:27.251472 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-27 01:03:27.251481 | orchestrator | Friday 27 March 2026 01:01:36 +0000 (0:00:01.540) 0:00:03.222 **********
2026-03-27 01:03:27.251487 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-27 01:03:27.251492 | orchestrator |
2026-03-27 01:03:27.251497 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-27 01:03:27.251502 | orchestrator | Friday 27 March 2026 01:01:40 +0000 (0:00:03.923) 0:00:07.146 **********
2026-03-27 01:03:27.251508 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-27 01:03:27.251513 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-27 01:03:27.251518 | orchestrator |
2026-03-27 01:03:27.251523 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-27 01:03:27.251529 | orchestrator | Friday 27 March 2026 01:01:47 +0000 (0:00:06.690) 0:00:13.836 **********
2026-03-27 01:03:27.251533 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-27 01:03:27.251537 | orchestrator |
2026-03-27 01:03:27.251540 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-27 01:03:27.251543 | orchestrator | Friday 27 March 2026 01:01:50 +0000 (0:00:03.253) 0:00:17.090 **********
2026-03-27 01:03:27.251546 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-27 01:03:27.251549 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-27 01:03:27.251552 | orchestrator |
2026-03-27 01:03:27.251555 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-27 01:03:27.251558 | orchestrator | Friday 27 March 2026 01:01:54 +0000 (0:00:03.640) 0:00:20.731 **********
2026-03-27 01:03:27.251561 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-27 01:03:27.251564 | orchestrator |
2026-03-27 01:03:27.251567 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-27 01:03:27.251570 | orchestrator | Friday 27 March 2026 01:01:57 +0000 (0:00:03.113) 0:00:23.845 **********
2026-03-27 01:03:27.251573 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-27 01:03:27.251576 | orchestrator |
2026-03-27 01:03:27.251579 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-27 01:03:27.251582 | orchestrator | Friday 27 March 2026 01:02:01 +0000 (0:00:03.654) 0:00:27.499 **********
2026-03-27 01:03:27.251585 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:03:27.251588 | orchestrator |
2026-03-27 01:03:27.251592 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-27 01:03:27.251598 | orchestrator | Friday 27 March 2026 01:02:04 +0000 (0:00:02.950) 0:00:30.450 **********
2026-03-27 01:03:27.251601 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:03:27.251604 | orchestrator |
2026-03-27 01:03:27.251607 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-27 01:03:27.251611 | orchestrator | Friday 27 March 2026 01:02:07 +0000 (0:00:03.680) 0:00:34.131 **********
2026-03-27 01:03:27.251614 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:03:27.251617 | orchestrator |
2026-03-27 01:03:27.251620 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-27 01:03:27.251623 | orchestrator | Friday 27 March 2026 01:02:11 +0000 (0:00:03.129) 0:00:37.260 **********
2026-03-27 01:03:27.251628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251678 | orchestrator |
2026-03-27 01:03:27.251681 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-27 01:03:27.251685 | orchestrator | Friday 27 March 2026 01:02:12 +0000 (0:00:01.362) 0:00:38.622 **********
2026-03-27 01:03:27.251688 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:03:27.251691 | orchestrator |
2026-03-27 01:03:27.251694 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-27 01:03:27.251697 | orchestrator | Friday 27 March 2026 01:02:12 +0000 (0:00:00.128) 0:00:38.750 **********
2026-03-27 01:03:27.251700 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:03:27.251703 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:03:27.251706 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:03:27.251709 | orchestrator |
2026-03-27 01:03:27.251712 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-27 01:03:27.251716 | orchestrator | Friday 27 March 2026 01:02:12 +0000 (0:00:00.240) 0:00:38.991 **********
2026-03-27 01:03:27.251719 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-27 01:03:27.251722 | orchestrator |
2026-03-27 01:03:27.251725 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-03-27 01:03:27.251728 | orchestrator | Friday 27 March 2026 01:02:13 +0000 (0:00:00.856) 0:00:39.848 **********
2026-03-27 01:03:27.251733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251761 | orchestrator |
2026-03-27 01:03:27.251764 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-03-27 01:03:27.251767 | orchestrator | Friday 27 March 2026 01:02:15 +0000 (0:00:02.075) 0:00:41.924 **********
2026-03-27 01:03:27.251770 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:03:27.251774 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:03:27.251777 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:03:27.251780 | orchestrator |
2026-03-27 01:03:27.251783 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-27 01:03:27.251786 | orchestrator | Friday 27 March 2026 01:02:16 +0000 (0:00:00.383) 0:00:42.307 **********
2026-03-27 01:03:27.251789 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 01:03:27.251792 | orchestrator |
2026-03-27 01:03:27.251795 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-03-27 01:03:27.251801 | orchestrator | Friday 27 March 2026 01:02:16 +0000 (0:00:00.514) 0:00:42.822 **********
2026-03-27 01:03:27.251807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251838 | orchestrator |
2026-03-27 01:03:27.251841 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-03-27 01:03:27.251844 | orchestrator | Friday 27 March 2026 01:02:18 +0000 (0:00:02.380) 0:00:45.202 **********
2026-03-27 01:03:27.251847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251853 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:03:27.251858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251867 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:03:27.251873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251880 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:03:27.251883 | orchestrator |
2026-03-27 01:03:27.251886 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-03-27 01:03:27.251889 | orchestrator | Friday 27 March 2026 01:02:19 +0000 (0:00:00.808) 0:00:46.010 **********
2026-03-27 01:03:27.251894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.251897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.251903 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:03:27.252035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.252043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.252048 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:03:27.252053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-27 01:03:27.252062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:03:27.252067 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:03:27.252072 | orchestrator |
2026-03-27 01:03:27.252077 | orchestrator | TASK [magnum : Copying over config.json files
for services] ******************** 2026-03-27 01:03:27.252086 | orchestrator | Friday 27 March 2026 01:02:20 +0000 (0:00:00.809) 0:00:46.820 ********** 2026-03-27 01:03:27.252091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 01:03:27.252099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2026-03-27 01:03:27.252105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 01:03:27.252110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:03:27.252117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:03:27.252126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:03:27.252131 | orchestrator | 2026-03-27 01:03:27.252135 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-27 01:03:27.252140 | orchestrator | Friday 27 March 2026 01:02:22 +0000 (0:00:02.055) 0:00:48.876 ********** 2026-03-27 01:03:27.252148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 01:03:27.252154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 01:03:27.252160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 01:03:27.252171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:03:27.252177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:03:27.252187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:03:27.252192 | orchestrator | 2026-03-27 01:03:27.252197 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-27 01:03:27.252203 | orchestrator | Friday 27 March 2026 01:02:29 +0000 (0:00:07.314) 0:00:56.190 ********** 2026-03-27 01:03:27.252208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-27 01:03:27.252214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-27 01:03:27.252223 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:27.252231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-27 01:03:27.252237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-27 01:03:27.252242 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:27.252248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-27 01:03:27.252251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-27 01:03:27.252254 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:27.252257 | orchestrator | 2026-03-27 01:03:27.252260 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-27 01:03:27.252264 | orchestrator | Friday 27 March 2026 01:02:31 +0000 (0:00:01.088) 0:00:57.279 ********** 2026-03-27 01:03:27.252269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 01:03:27.252278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 01:03:27.252286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-27 01:03:27.252291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:03:27.252297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:03:27.252311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:03:27.252316 | orchestrator | 2026-03-27 01:03:27.252322 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-27 01:03:27.252328 | orchestrator | Friday 27 March 2026 01:02:33 +0000 (0:00:02.313) 0:00:59.593 ********** 2026-03-27 01:03:27.252333 | orchestrator 
| skipping: [testbed-node-0] 2026-03-27 01:03:27.252338 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:27.252344 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:27.252348 | orchestrator | 2026-03-27 01:03:27.252352 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-27 01:03:27.252355 | orchestrator | Friday 27 March 2026 01:02:33 +0000 (0:00:00.484) 0:01:00.077 ********** 2026-03-27 01:03:27.252358 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:03:27.252361 | orchestrator | 2026-03-27 01:03:27.252364 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-27 01:03:27.252367 | orchestrator | Friday 27 March 2026 01:02:35 +0000 (0:00:02.035) 0:01:02.113 ********** 2026-03-27 01:03:27.252370 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:03:27.252373 | orchestrator | 2026-03-27 01:03:27.252377 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-27 01:03:27.252380 | orchestrator | Friday 27 March 2026 01:02:37 +0000 (0:00:01.978) 0:01:04.092 ********** 2026-03-27 01:03:27.252383 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:03:27.252386 | orchestrator | 2026-03-27 01:03:27.252389 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-27 01:03:27.252392 | orchestrator | Friday 27 March 2026 01:02:51 +0000 (0:00:14.128) 0:01:18.220 ********** 2026-03-27 01:03:27.252395 | orchestrator | 2026-03-27 01:03:27.252398 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-27 01:03:27.252401 | orchestrator | Friday 27 March 2026 01:02:52 +0000 (0:00:00.656) 0:01:18.876 ********** 2026-03-27 01:03:27.252404 | orchestrator | 2026-03-27 01:03:27.252407 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-27 
01:03:27.252410 | orchestrator | Friday 27 March 2026 01:02:52 +0000 (0:00:00.168) 0:01:19.045 ********** 2026-03-27 01:03:27.252413 | orchestrator | 2026-03-27 01:03:27.252417 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-27 01:03:27.252420 | orchestrator | Friday 27 March 2026 01:02:52 +0000 (0:00:00.117) 0:01:19.162 ********** 2026-03-27 01:03:27.252423 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:03:27.252426 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:03:27.252429 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:03:27.252432 | orchestrator | 2026-03-27 01:03:27.252435 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-27 01:03:27.252441 | orchestrator | Friday 27 March 2026 01:03:10 +0000 (0:00:17.658) 0:01:36.821 ********** 2026-03-27 01:03:27.252444 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:03:27.252447 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:03:27.252450 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:03:27.252453 | orchestrator | 2026-03-27 01:03:27.252456 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:03:27.252462 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-27 01:03:27.252466 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-27 01:03:27.252469 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-27 01:03:27.252473 | orchestrator | 2026-03-27 01:03:27.252476 | orchestrator | 2026-03-27 01:03:27.252479 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:03:27.252482 | orchestrator | Friday 27 March 2026 01:03:25 +0000 (0:00:14.563) 0:01:51.385 ********** 
2026-03-27 01:03:27.252485 | orchestrator | =============================================================================== 2026-03-27 01:03:27.252488 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.66s 2026-03-27 01:03:27.252491 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.56s 2026-03-27 01:03:27.252494 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.13s 2026-03-27 01:03:27.252497 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.31s 2026-03-27 01:03:27.252500 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.69s 2026-03-27 01:03:27.252503 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.92s 2026-03-27 01:03:27.252507 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.68s 2026-03-27 01:03:27.252510 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.65s 2026-03-27 01:03:27.252513 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.64s 2026-03-27 01:03:27.252516 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.25s 2026-03-27 01:03:27.252519 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.13s 2026-03-27 01:03:27.252522 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.11s 2026-03-27 01:03:27.252525 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.95s 2026-03-27 01:03:27.252528 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.38s 2026-03-27 01:03:27.252531 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.31s 2026-03-27 
01:03:27.252534 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.08s 2026-03-27 01:03:27.252539 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.06s 2026-03-27 01:03:27.252542 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.04s 2026-03-27 01:03:27.252545 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 1.98s 2026-03-27 01:03:27.252548 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.54s 2026-03-27 01:03:27.252551 | orchestrator | 2026-03-27 01:03:27 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:27.252554 | orchestrator | 2026-03-27 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:30.310588 | orchestrator | 2026-03-27 01:03:30 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:30.312276 | orchestrator | 2026-03-27 01:03:30 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:30.312912 | orchestrator | 2026-03-27 01:03:30 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:03:30.315866 | orchestrator | 2026-03-27 01:03:30 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:30.315917 | orchestrator | 2026-03-27 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:33.358010 | orchestrator | 2026-03-27 01:03:33 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:33.358460 | orchestrator | 2026-03-27 01:03:33 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:33.359197 | orchestrator | 2026-03-27 01:03:33 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:03:33.359955 | orchestrator | 2026-03-27 01:03:33 | INFO  | Task 
18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:33.359989 | orchestrator | 2026-03-27 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:36.399891 | orchestrator | 2026-03-27 01:03:36 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:36.401304 | orchestrator | 2026-03-27 01:03:36 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:36.402967 | orchestrator | 2026-03-27 01:03:36 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:03:36.403576 | orchestrator | 2026-03-27 01:03:36 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:36.403616 | orchestrator | 2026-03-27 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:39.435217 | orchestrator | 2026-03-27 01:03:39 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:39.435862 | orchestrator | 2026-03-27 01:03:39 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:39.438969 | orchestrator | 2026-03-27 01:03:39 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:03:39.441463 | orchestrator | 2026-03-27 01:03:39 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:39.441528 | orchestrator | 2026-03-27 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:42.479960 | orchestrator | 2026-03-27 01:03:42 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:42.480007 | orchestrator | 2026-03-27 01:03:42 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:42.482072 | orchestrator | 2026-03-27 01:03:42 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:03:42.482119 | orchestrator | 2026-03-27 01:03:42 | INFO  | Task 
18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:42.482128 | orchestrator | 2026-03-27 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:45.516418 | orchestrator | 2026-03-27 01:03:45 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:45.518889 | orchestrator | 2026-03-27 01:03:45 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:45.520132 | orchestrator | 2026-03-27 01:03:45 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:03:45.521020 | orchestrator | 2026-03-27 01:03:45 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:45.521061 | orchestrator | 2026-03-27 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:48.554185 | orchestrator | 2026-03-27 01:03:48 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state STARTED 2026-03-27 01:03:48.555835 | orchestrator | 2026-03-27 01:03:48 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:48.557411 | orchestrator | 2026-03-27 01:03:48 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:03:48.558053 | orchestrator | 2026-03-27 01:03:48 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:48.558089 | orchestrator | 2026-03-27 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:51.580495 | orchestrator | 2026-03-27 01:03:51.580601 | orchestrator | 2026-03-27 01:03:51 | INFO  | Task fc31f5c5-f773-46a1-8fef-aa147c120287 is in state SUCCESS 2026-03-27 01:03:51.581246 | orchestrator | 2026-03-27 01:03:51.581269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 01:03:51.581274 | orchestrator | 2026-03-27 01:03:51.581277 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
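The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a client polling task state once per second until the task reaches a terminal state (the final poll reports SUCCESS). A minimal sketch of such a loop, assuming a hypothetical `get_task_state` callable standing in for the actual OSISM task API:

```python
import time

def wait_for_task(task_id, get_task_state, interval=1.0, timeout=3600):
    """Poll a task until it leaves STARTED, mirroring the
    'Wait 1 second(s) until the next check' loop in the log.
    `get_task_state` is a stand-in for the real state lookup."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_task_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
        print(f"Task {task_id} is in state {state}")
        if state in ("SUCCESS", "FAILURE"):
            return state
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish within {timeout}s")
```

The log shows several task IDs being polled in one pass; the same loop applies per task, with the deployment proceeding once each reports SUCCESS.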
2026-03-27 01:03:51.581281 | orchestrator | Friday 27 March 2026 00:59:29 +0000 (0:00:00.377) 0:00:00.377 ********** 2026-03-27 01:03:51.581284 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:03:51.581288 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:03:51.581291 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:03:51.581295 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:03:51.581298 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:03:51.581301 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:03:51.581304 | orchestrator | 2026-03-27 01:03:51.581307 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 01:03:51.581310 | orchestrator | Friday 27 March 2026 00:59:30 +0000 (0:00:01.052) 0:00:01.430 ********** 2026-03-27 01:03:51.581314 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-27 01:03:51.581317 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-27 01:03:51.581320 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-27 01:03:51.581323 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-27 01:03:51.581326 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-27 01:03:51.581330 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-27 01:03:51.581333 | orchestrator | 2026-03-27 01:03:51.581336 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-27 01:03:51.581339 | orchestrator | 2026-03-27 01:03:51.581342 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-27 01:03:51.581345 | orchestrator | Friday 27 March 2026 00:59:31 +0000 (0:00:00.756) 0:00:02.187 ********** 2026-03-27 01:03:51.581349 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-27 01:03:51.581353 | orchestrator | 2026-03-27 01:03:51.581356 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-27 01:03:51.581359 | orchestrator | Friday 27 March 2026 00:59:32 +0000 (0:00:01.054) 0:00:03.241 ********** 2026-03-27 01:03:51.581363 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:03:51.581366 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:03:51.581369 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:03:51.581375 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:03:51.581380 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:03:51.581385 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:03:51.581389 | orchestrator | 2026-03-27 01:03:51.581394 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-27 01:03:51.581398 | orchestrator | Friday 27 March 2026 00:59:33 +0000 (0:00:01.546) 0:00:04.788 ********** 2026-03-27 01:03:51.581404 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:03:51.581409 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:03:51.581414 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:03:51.581419 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:03:51.581424 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:03:51.581427 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:03:51.581430 | orchestrator | 2026-03-27 01:03:51.581433 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-27 01:03:51.581436 | orchestrator | Friday 27 March 2026 00:59:35 +0000 (0:00:01.249) 0:00:06.038 ********** 2026-03-27 01:03:51.581451 | orchestrator | ok: [testbed-node-0] => { 2026-03-27 01:03:51.581455 | orchestrator |  "changed": false, 2026-03-27 01:03:51.581458 | orchestrator |  "msg": "All assertions passed" 2026-03-27 01:03:51.581462 | orchestrator | } 2026-03-27 01:03:51.581465 | orchestrator | ok: [testbed-node-1] => { 2026-03-27 
01:03:51.581468 | orchestrator |  "changed": false, 2026-03-27 01:03:51.581471 | orchestrator |  "msg": "All assertions passed" 2026-03-27 01:03:51.581474 | orchestrator | } 2026-03-27 01:03:51.581477 | orchestrator | ok: [testbed-node-2] => { 2026-03-27 01:03:51.581480 | orchestrator |  "changed": false, 2026-03-27 01:03:51.581483 | orchestrator |  "msg": "All assertions passed" 2026-03-27 01:03:51.581486 | orchestrator | } 2026-03-27 01:03:51.581489 | orchestrator | ok: [testbed-node-3] => { 2026-03-27 01:03:51.581493 | orchestrator |  "changed": false, 2026-03-27 01:03:51.581496 | orchestrator |  "msg": "All assertions passed" 2026-03-27 01:03:51.581499 | orchestrator | } 2026-03-27 01:03:51.581502 | orchestrator | ok: [testbed-node-4] => { 2026-03-27 01:03:51.581505 | orchestrator |  "changed": false, 2026-03-27 01:03:51.581508 | orchestrator |  "msg": "All assertions passed" 2026-03-27 01:03:51.581511 | orchestrator | } 2026-03-27 01:03:51.581514 | orchestrator | ok: [testbed-node-5] => { 2026-03-27 01:03:51.581517 | orchestrator |  "changed": false, 2026-03-27 01:03:51.581520 | orchestrator |  "msg": "All assertions passed" 2026-03-27 01:03:51.581523 | orchestrator | } 2026-03-27 01:03:51.581526 | orchestrator | 2026-03-27 01:03:51.581530 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-27 01:03:51.581534 | orchestrator | Friday 27 March 2026 00:59:35 +0000 (0:00:00.529) 0:00:06.567 ********** 2026-03-27 01:03:51.581539 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.581544 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.581549 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.581554 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.581589 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.581595 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.581600 | orchestrator | 2026-03-27 01:03:51.581606 | orchestrator | TASK 
[service-ks-register : neutron | Creating services] *********************** 2026-03-27 01:03:51.581620 | orchestrator | Friday 27 March 2026 00:59:36 +0000 (0:00:00.690) 0:00:07.258 ********** 2026-03-27 01:03:51.581688 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-27 01:03:51.581694 | orchestrator | 2026-03-27 01:03:51.581705 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-27 01:03:51.581710 | orchestrator | Friday 27 March 2026 00:59:40 +0000 (0:00:03.978) 0:00:11.237 ********** 2026-03-27 01:03:51.581716 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-27 01:03:51.581722 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-27 01:03:51.581727 | orchestrator | 2026-03-27 01:03:51.581741 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-27 01:03:51.581747 | orchestrator | Friday 27 March 2026 00:59:47 +0000 (0:00:07.326) 0:00:18.563 ********** 2026-03-27 01:03:51.581761 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-27 01:03:51.581767 | orchestrator | 2026-03-27 01:03:51.581771 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-27 01:03:51.581774 | orchestrator | Friday 27 March 2026 00:59:51 +0000 (0:00:03.583) 0:00:22.147 ********** 2026-03-27 01:03:51.581777 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-27 01:03:51.581780 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-27 01:03:51.581783 | orchestrator | 2026-03-27 01:03:51.581786 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-27 01:03:51.581789 | orchestrator | Friday 27 March 2026 00:59:55 +0000 (0:00:04.587) 0:00:26.734 ********** 
2026-03-27 01:03:51.581793 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-27 01:03:51.581796 | orchestrator | 2026-03-27 01:03:51.581804 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-27 01:03:51.581807 | orchestrator | Friday 27 March 2026 00:59:59 +0000 (0:00:03.764) 0:00:30.499 ********** 2026-03-27 01:03:51.581810 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-27 01:03:51.581813 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-27 01:03:51.581816 | orchestrator | 2026-03-27 01:03:51.581819 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-27 01:03:51.581822 | orchestrator | Friday 27 March 2026 01:00:06 +0000 (0:00:07.438) 0:00:37.938 ********** 2026-03-27 01:03:51.581825 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.581828 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.581832 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.581837 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.581842 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.581847 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.581851 | orchestrator | 2026-03-27 01:03:51.581856 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-27 01:03:51.581861 | orchestrator | Friday 27 March 2026 01:00:07 +0000 (0:00:00.597) 0:00:38.535 ********** 2026-03-27 01:03:51.581894 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.581899 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.581904 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.581910 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.581915 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.581921 | orchestrator | skipping: 
[testbed-node-5] 2026-03-27 01:03:51.581926 | orchestrator | 2026-03-27 01:03:51.581932 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-27 01:03:51.581937 | orchestrator | Friday 27 March 2026 01:00:10 +0000 (0:00:02.659) 0:00:41.195 ********** 2026-03-27 01:03:51.581943 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:03:51.581948 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:03:51.581958 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:03:51.581964 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:03:51.581972 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:03:51.581977 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:03:51.581981 | orchestrator | 2026-03-27 01:03:51.581985 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-27 01:03:51.581988 | orchestrator | Friday 27 March 2026 01:00:11 +0000 (0:00:01.287) 0:00:42.482 ********** 2026-03-27 01:03:51.581992 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.581996 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.581999 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.582003 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582007 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582010 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.582042 | orchestrator | 2026-03-27 01:03:51.582046 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-27 01:03:51.582049 | orchestrator | Friday 27 March 2026 01:00:15 +0000 (0:00:03.645) 0:00:46.127 ********** 2026-03-27 01:03:51.582055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582134 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582140 | orchestrator | 2026-03-27 01:03:51.582144 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-27 01:03:51.582149 | orchestrator | Friday 27 March 2026 01:00:18 +0000 (0:00:03.386) 0:00:49.514 ********** 2026-03-27 01:03:51.582153 | orchestrator | [WARNING]: Skipped 2026-03-27 01:03:51.582157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-27 01:03:51.582161 | orchestrator | due to this access issue: 2026-03-27 01:03:51.582164 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-27 01:03:51.582168 | orchestrator | a directory 2026-03-27 01:03:51.582172 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-27 01:03:51.582183 | orchestrator | 2026-03-27 01:03:51.582188 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-27 01:03:51.582196 | orchestrator | Friday 27 March 2026 01:00:19 +0000 (0:00:00.973) 0:00:50.487 ********** 2026-03-27 01:03:51.582203 | orchestrator | included: 
/ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 01:03:51.582210 | orchestrator | 2026-03-27 01:03:51.582215 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-27 01:03:51.582221 | orchestrator | Friday 27 March 2026 01:00:20 +0000 (0:00:01.319) 0:00:51.806 ********** 2026-03-27 01:03:51.582226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582254 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582261 | orchestrator | 2026-03-27 01:03:51.582264 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-27 01:03:51.582267 | orchestrator | Friday 27 March 2026 01:00:24 +0000 (0:00:03.626) 0:00:55.433 ********** 2026-03-27 01:03:51.582271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582274 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.582277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582283 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.582289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582307 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.582311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582314 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582321 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.582324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582330 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582333 | orchestrator | 2026-03-27 01:03:51.582336 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-27 01:03:51.582339 | orchestrator | Friday 27 March 2026 01:00:27 +0000 (0:00:03.064) 0:00:58.498 ********** 2026-03-27 01:03:51.582342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582346 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.582358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582361 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.582365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582368 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582374 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582383 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.582386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582390 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.582398 | orchestrator | 2026-03-27 01:03:51.582401 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-27 01:03:51.582405 | orchestrator | Friday 27 March 2026 01:00:30 +0000 (0:00:03.173) 0:01:01.672 ********** 2026-03-27 01:03:51.582408 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.582411 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.582414 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.582417 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582420 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582423 | orchestrator | skipping: [testbed-node-5] 2026-03-27 
01:03:51.582426 | orchestrator | 2026-03-27 01:03:51.582430 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-27 01:03:51.582435 | orchestrator | Friday 27 March 2026 01:00:33 +0000 (0:00:02.688) 0:01:04.360 ********** 2026-03-27 01:03:51.582438 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.582441 | orchestrator | 2026-03-27 01:03:51.582445 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-27 01:03:51.582449 | orchestrator | Friday 27 March 2026 01:00:33 +0000 (0:00:00.254) 0:01:04.615 ********** 2026-03-27 01:03:51.582454 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.582458 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.582466 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.582473 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582477 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582482 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.582488 | orchestrator | 2026-03-27 01:03:51.582493 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-27 01:03:51.582498 | orchestrator | Friday 27 March 2026 01:00:34 +0000 (0:00:00.611) 0:01:05.226 ********** 2026-03-27 01:03:51.582504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582516 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.582521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582525 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.582530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582535 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.582546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582552 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 
01:03:51.582566 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.582570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582576 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582581 | orchestrator | 2026-03-27 01:03:51.582586 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-27 01:03:51.582591 | orchestrator | Friday 27 March 2026 01:00:37 +0000 (0:00:03.459) 0:01:08.685 ********** 2026-03-27 01:03:51.582596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582661 | orchestrator | 2026-03-27 01:03:51.582666 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-27 01:03:51.582671 | orchestrator | Friday 27 March 2026 01:00:41 +0000 (0:00:04.032) 0:01:12.718 ********** 2026-03-27 01:03:51.582698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582726 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582731 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582740 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.582743 | orchestrator | 2026-03-27 01:03:51.582746 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-27 01:03:51.582749 | orchestrator | Friday 27 
March 2026 01:00:48 +0000 (0:00:06.497) 0:01:19.215 ********** 2026-03-27 01:03:51.582756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582762 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.582766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582769 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582775 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.582779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.582782 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.582787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582793 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582802 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.582805 | orchestrator | 2026-03-27 01:03:51.582809 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-27 01:03:51.582812 | orchestrator | Friday 27 March 2026 01:00:50 +0000 (0:00:01.995) 0:01:21.211 ********** 2026-03-27 01:03:51.582815 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.582818 | 
orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582821 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582824 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:03:51.582827 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:03:51.582830 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:03:51.582834 | orchestrator | 2026-03-27 01:03:51.582837 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-27 01:03:51.582840 | orchestrator | Friday 27 March 2026 01:00:52 +0000 (0:00:02.749) 0:01:23.961 ********** 2026-03-27 01:03:51.582843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582847 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582853 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.582859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.582864 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.582883 | orchestrator | 
2026-03-27 01:03:51.582886 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-27 01:03:51.582889 | orchestrator | Friday 27 March 2026 01:00:56 +0000 (0:00:03.904) 0:01:27.865 ********** 2026-03-27 01:03:51.582892 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.582895 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.582899 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.582905 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582909 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582918 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.582923 | orchestrator | 2026-03-27 01:03:51.582929 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-27 01:03:51.582934 | orchestrator | Friday 27 March 2026 01:00:59 +0000 (0:00:02.310) 0:01:30.176 ********** 2026-03-27 01:03:51.582939 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.582943 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.582948 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.582957 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.582963 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.582968 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.582973 | orchestrator | 2026-03-27 01:03:51.582985 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-27 01:03:51.582989 | orchestrator | Friday 27 March 2026 01:01:01 +0000 (0:00:02.461) 0:01:32.637 ********** 2026-03-27 01:03:51.582994 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.582999 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583004 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583009 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583014 | orchestrator | 
skipping: [testbed-node-3] 2026-03-27 01:03:51.583019 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583024 | orchestrator | 2026-03-27 01:03:51.583030 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-27 01:03:51.583035 | orchestrator | Friday 27 March 2026 01:01:03 +0000 (0:00:01.935) 0:01:34.573 ********** 2026-03-27 01:03:51.583048 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583053 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583058 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583063 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583066 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583069 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583072 | orchestrator | 2026-03-27 01:03:51.583076 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-27 01:03:51.583079 | orchestrator | Friday 27 March 2026 01:01:05 +0000 (0:00:02.088) 0:01:36.662 ********** 2026-03-27 01:03:51.583082 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583085 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583088 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583092 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583098 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583102 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583105 | orchestrator | 2026-03-27 01:03:51.583108 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-27 01:03:51.583111 | orchestrator | Friday 27 March 2026 01:01:08 +0000 (0:00:02.917) 0:01:39.579 ********** 2026-03-27 01:03:51.583114 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583117 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583120 | orchestrator | 
skipping: [testbed-node-0] 2026-03-27 01:03:51.583123 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583126 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583130 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583133 | orchestrator | 2026-03-27 01:03:51.583136 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-27 01:03:51.583139 | orchestrator | Friday 27 March 2026 01:01:10 +0000 (0:00:01.802) 0:01:41.381 ********** 2026-03-27 01:03:51.583142 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-27 01:03:51.583145 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583149 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-27 01:03:51.583152 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583155 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-27 01:03:51.583158 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583161 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-27 01:03:51.583164 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583167 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-27 01:03:51.583171 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583174 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-27 01:03:51.583181 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583184 | orchestrator | 2026-03-27 01:03:51.583187 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-27 01:03:51.583190 | orchestrator | Friday 27 March 2026 01:01:12 +0000 
(0:00:01.837) 0:01:43.219 ********** 2026-03-27 01:03:51.583194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.583197 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.583206 | 
orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.583223 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.583233 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.583248 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.583257 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583261 | orchestrator | 2026-03-27 01:03:51.583267 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-27 01:03:51.583272 | orchestrator | Friday 27 March 2026 01:01:14 +0000 (0:00:02.029) 0:01:45.248 ********** 2026-03-27 01:03:51.583280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.583285 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.583300 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.583315 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.583337 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.583345 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.583354 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583357 | orchestrator | 2026-03-27 01:03:51.583360 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-27 01:03:51.583363 | orchestrator | Friday 27 March 2026 01:01:16 +0000 (0:00:02.000) 0:01:47.249 ********** 2026-03-27 01:03:51.583366 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583372 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583376 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583379 | orchestrator | 
skipping: [testbed-node-3] 2026-03-27 01:03:51.583382 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583385 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583391 | orchestrator | 2026-03-27 01:03:51.583394 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-27 01:03:51.583397 | orchestrator | Friday 27 March 2026 01:01:19 +0000 (0:00:02.845) 0:01:50.094 ********** 2026-03-27 01:03:51.583400 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583403 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583406 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583410 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:03:51.583416 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:03:51.583420 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:03:51.583425 | orchestrator | 2026-03-27 01:03:51.583429 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-27 01:03:51.583434 | orchestrator | Friday 27 March 2026 01:01:22 +0000 (0:00:03.465) 0:01:53.560 ********** 2026-03-27 01:03:51.583439 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583443 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583448 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583452 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583457 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583483 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583487 | orchestrator | 2026-03-27 01:03:51.583490 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-27 01:03:51.583493 | orchestrator | Friday 27 March 2026 01:01:25 +0000 (0:00:02.541) 0:01:56.102 ********** 2026-03-27 01:03:51.583496 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583499 | orchestrator | 
skipping: [testbed-node-1] 2026-03-27 01:03:51.583502 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583505 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583508 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583512 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583515 | orchestrator | 2026-03-27 01:03:51.583520 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-27 01:03:51.583525 | orchestrator | Friday 27 March 2026 01:01:28 +0000 (0:00:03.008) 0:01:59.110 ********** 2026-03-27 01:03:51.583530 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583535 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583540 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583545 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583550 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583555 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583560 | orchestrator | 2026-03-27 01:03:51.583565 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-27 01:03:51.583570 | orchestrator | Friday 27 March 2026 01:01:30 +0000 (0:00:02.496) 0:02:01.607 ********** 2026-03-27 01:03:51.583576 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583580 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583583 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583586 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583589 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583592 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583595 | orchestrator | 2026-03-27 01:03:51.583598 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-27 01:03:51.583602 | orchestrator | Friday 27 March 2026 01:01:32 +0000 (0:00:01.996) 
0:02:03.604 ********** 2026-03-27 01:03:51.583605 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583608 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583611 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583614 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583617 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583620 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583639 | orchestrator | 2026-03-27 01:03:51.583643 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-27 01:03:51.583646 | orchestrator | Friday 27 March 2026 01:01:35 +0000 (0:00:02.700) 0:02:06.304 ********** 2026-03-27 01:03:51.583654 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583657 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583660 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583663 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583666 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583669 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583673 | orchestrator | 2026-03-27 01:03:51.583676 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-27 01:03:51.583679 | orchestrator | Friday 27 March 2026 01:01:37 +0000 (0:00:02.541) 0:02:08.846 ********** 2026-03-27 01:03:51.583682 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583685 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583688 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583691 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583694 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583697 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583700 | orchestrator | 2026-03-27 01:03:51.583703 | orchestrator | TASK [neutron : Copying over 
neutron-tls-proxy.cfg] **************************** 2026-03-27 01:03:51.583706 | orchestrator | Friday 27 March 2026 01:01:39 +0000 (0:00:01.755) 0:02:10.601 ********** 2026-03-27 01:03:51.583709 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-27 01:03:51.583715 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583718 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-27 01:03:51.583722 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583725 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-27 01:03:51.583728 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583731 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-27 01:03:51.583734 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583740 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-27 01:03:51.583744 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583747 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-27 01:03:51.583750 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583753 | orchestrator | 2026-03-27 01:03:51.583756 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-27 01:03:51.583759 | orchestrator | Friday 27 March 2026 01:01:41 +0000 (0:00:02.193) 0:02:12.795 ********** 2026-03-27 01:03:51.583762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.583766 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.583775 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-27 01:03:51.583782 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.583790 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.583803 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-27 01:03:51.583817 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583822 | orchestrator | 2026-03-27 01:03:51.583827 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-27 01:03:51.583833 | orchestrator | Friday 27 March 2026 01:01:44 +0000 (0:00:02.692) 0:02:15.487 ********** 2026-03-27 01:03:51.583836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.583840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.583849 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.583854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-27 01:03:51.583859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.583869 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-27 01:03:51.583874 | orchestrator | 2026-03-27 01:03:51.583881 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-27 01:03:51.583884 | orchestrator | Friday 27 March 2026 01:01:47 +0000 (0:00:02.602) 0:02:18.089 ********** 2026-03-27 01:03:51.583887 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:03:51.583890 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:03:51.583893 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:03:51.583896 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:03:51.583899 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:03:51.583903 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:03:51.583906 | orchestrator | 2026-03-27 01:03:51.583909 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-27 01:03:51.583915 | orchestrator | Friday 27 March 2026 01:01:48 +0000 (0:00:01.023) 0:02:19.112 ********** 2026-03-27 01:03:51.583920 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:03:51.583925 | orchestrator | 2026-03-27 01:03:51.583930 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-27 01:03:51.583935 | orchestrator | Friday 27 March 
2026 01:01:50 +0000 (0:00:02.301) 0:02:21.414 ********** 2026-03-27 01:03:51.583940 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:03:51.583946 | orchestrator | 2026-03-27 01:03:51.583951 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-27 01:03:51.583956 | orchestrator | Friday 27 March 2026 01:01:52 +0000 (0:00:02.403) 0:02:23.818 ********** 2026-03-27 01:03:51.583962 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:03:51.583966 | orchestrator | 2026-03-27 01:03:51.583969 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-27 01:03:51.583972 | orchestrator | Friday 27 March 2026 01:02:29 +0000 (0:00:37.007) 0:03:00.825 ********** 2026-03-27 01:03:51.583975 | orchestrator | 2026-03-27 01:03:51.583983 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-27 01:03:51.583986 | orchestrator | Friday 27 March 2026 01:02:30 +0000 (0:00:00.221) 0:03:01.047 ********** 2026-03-27 01:03:51.583989 | orchestrator | 2026-03-27 01:03:51.583992 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-27 01:03:51.583995 | orchestrator | Friday 27 March 2026 01:02:30 +0000 (0:00:00.255) 0:03:01.302 ********** 2026-03-27 01:03:51.583998 | orchestrator | 2026-03-27 01:03:51.584001 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-27 01:03:51.584004 | orchestrator | Friday 27 March 2026 01:02:30 +0000 (0:00:00.161) 0:03:01.464 ********** 2026-03-27 01:03:51.584007 | orchestrator | 2026-03-27 01:03:51.584014 | orchestrator | TASK [neutron : Flush Handlers] ********************2026-03-27 01:03:51 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:51.584021 | orchestrator | 2026-03-27 01:03:51 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 
2026-03-27 01:03:51.584118 | orchestrator | **************************** 2026-03-27 01:03:51.584123 | orchestrator | Friday 27 March 2026 01:02:30 +0000 (0:00:00.169) 0:03:01.634 ********** 2026-03-27 01:03:51.584127 | orchestrator | 2026-03-27 01:03:51.584130 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-27 01:03:51.584133 | orchestrator | Friday 27 March 2026 01:02:30 +0000 (0:00:00.150) 0:03:01.784 ********** 2026-03-27 01:03:51.584136 | orchestrator | 2026-03-27 01:03:51.584139 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-27 01:03:51.584142 | orchestrator | Friday 27 March 2026 01:02:30 +0000 (0:00:00.088) 0:03:01.873 ********** 2026-03-27 01:03:51.584145 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:03:51.584148 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:03:51.584151 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:03:51.584154 | orchestrator | 2026-03-27 01:03:51.584157 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-27 01:03:51.584161 | orchestrator | Friday 27 March 2026 01:02:52 +0000 (0:00:21.611) 0:03:23.484 ********** 2026-03-27 01:03:51.584164 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:03:51.584167 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:03:51.584170 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:03:51.584173 | orchestrator | 2026-03-27 01:03:51.584176 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:03:51.584179 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-27 01:03:51.584183 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-27 01:03:51.584186 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 
failed=0 skipped=31  rescued=0 ignored=0 2026-03-27 01:03:51.584191 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-27 01:03:51.584197 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-27 01:03:51.584202 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-27 01:03:51.584209 | orchestrator | 2026-03-27 01:03:51.584218 | orchestrator | 2026-03-27 01:03:51.584223 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:03:51.584227 | orchestrator | Friday 27 March 2026 01:03:49 +0000 (0:00:56.630) 0:04:20.115 ********** 2026-03-27 01:03:51.584232 | orchestrator | =============================================================================== 2026-03-27 01:03:51.584237 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 56.63s 2026-03-27 01:03:51.584242 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 37.00s 2026-03-27 01:03:51.584247 | orchestrator | neutron : Restart neutron-server container ----------------------------- 21.61s 2026-03-27 01:03:51.584252 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.44s 2026-03-27 01:03:51.584257 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.33s 2026-03-27 01:03:51.584262 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.50s 2026-03-27 01:03:51.584266 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.59s 2026-03-27 01:03:51.584271 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.03s 2026-03-27 01:03:51.584280 | orchestrator | service-ks-register : neutron | Creating services 
----------------------- 3.98s 2026-03-27 01:03:51.584285 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.90s 2026-03-27 01:03:51.584289 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.76s 2026-03-27 01:03:51.584294 | orchestrator | Setting sysctl values --------------------------------------------------- 3.65s 2026-03-27 01:03:51.584299 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.63s 2026-03-27 01:03:51.584304 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.58s 2026-03-27 01:03:51.584309 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.47s 2026-03-27 01:03:51.584318 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.46s 2026-03-27 01:03:51.584323 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.39s 2026-03-27 01:03:51.584328 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.17s 2026-03-27 01:03:51.584333 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.06s 2026-03-27 01:03:51.584339 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.01s 2026-03-27 01:03:51.584344 | orchestrator | 2026-03-27 01:03:51 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:03:51.584349 | orchestrator | 2026-03-27 01:03:51 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:51.584354 | orchestrator | 2026-03-27 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:03:54.636434 | orchestrator | 2026-03-27 01:03:54 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:03:54.636501 | orchestrator | 2026-03-27 01:03:54 | INFO  | Task 
c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:03:54.636510 | orchestrator | 2026-03-27 01:03:54 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:03:54.636515 | orchestrator | 2026-03-27 01:03:54 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:03:54.636521 | orchestrator | 2026-03-27 01:03:54 | INFO  | Wait 1 second(s) until the next check [identical polling cycles for tasks f3b5e2d3-be9c-412a-b59c-1127764e86e8, c00ce638-79e0-4637-8057-e5dc57d0cc73, b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 and 18db67a8-d45f-49e7-98a4-cb8a23202963, repeated every ~3 seconds from 01:03:57 through 01:05:22, trimmed] 2026-03-27 01:05:25.933377 | orchestrator | 2026-03-27 01:05:25 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:25.934879 | orchestrator | 2026-03-27 01:05:25 | INFO  | Task 
c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:25.937891 | orchestrator | 2026-03-27 01:05:25 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:25.939216 | orchestrator | 2026-03-27 01:05:25 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state STARTED 2026-03-27 01:05:25.939252 | orchestrator | 2026-03-27 01:05:25 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:05:28.980028 | orchestrator | 2026-03-27 01:05:28 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:28.980311 | orchestrator | 2026-03-27 01:05:28 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:05:28.981195 | orchestrator | 2026-03-27 01:05:28 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:28.982588 | orchestrator | 2026-03-27 01:05:28 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:28.984790 | orchestrator | 2026-03-27 01:05:28 | INFO  | Task 18db67a8-d45f-49e7-98a4-cb8a23202963 is in state SUCCESS 2026-03-27 01:05:28.986257 | orchestrator | 2026-03-27 01:05:28.986309 | orchestrator | 2026-03-27 01:05:28.986319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 01:05:28.986326 | orchestrator | 2026-03-27 01:05:28.986333 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 01:05:28.986339 | orchestrator | Friday 27 March 2026 01:02:20 +0000 (0:00:00.324) 0:00:00.324 ********** 2026-03-27 01:05:28.986346 | orchestrator | ok: [testbed-manager] 2026-03-27 01:05:28.986353 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:05:28.986359 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:05:28.986365 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:05:28.986372 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:05:28.986378 | orchestrator | ok: 
[testbed-node-4] 2026-03-27 01:05:28.986384 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:05:28.986390 | orchestrator | 2026-03-27 01:05:28.986394 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 01:05:28.986398 | orchestrator | Friday 27 March 2026 01:02:21 +0000 (0:00:00.695) 0:00:01.019 ********** 2026-03-27 01:05:28.986402 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-27 01:05:28.986407 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-27 01:05:28.986413 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-27 01:05:28.986451 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-27 01:05:28.986457 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-27 01:05:28.986463 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-27 01:05:28.986508 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-27 01:05:28.986513 | orchestrator | 2026-03-27 01:05:28.986517 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-27 01:05:28.986521 | orchestrator | 2026-03-27 01:05:28.986525 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-27 01:05:28.986551 | orchestrator | Friday 27 March 2026 01:02:21 +0000 (0:00:00.746) 0:00:01.766 ********** 2026-03-27 01:05:28.986560 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 01:05:28.986567 | orchestrator | 2026-03-27 01:05:28.986571 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-27 01:05:28.986599 | orchestrator | Friday 27 March 2026 01:02:23 +0000 (0:00:01.227) 0:00:02.993 
********** 2026-03-27 01:05:28.986605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986623 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-27 01:05:28.986672 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986695 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-03-27 01:05:28.986711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986727 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.986733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.986740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986744 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.986751 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-27 01:05:28.986756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.986763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.986787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.986835 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.986842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.986847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.986857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-27 01:05:28.986864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-27 01:05:28.986870 | orchestrator |
2026-03-27 01:05:28.986877 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-27 01:05:28.986883 | orchestrator | Friday 27 March 2026 01:02:27 +0000 (0:00:04.330) 0:00:07.324 **********
2026-03-27 01:05:28.986892 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 01:05:28.986899 | orchestrator |
2026-03-27 01:05:28.986905 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-03-27 01:05:28.986911 | orchestrator | Friday 27 March 2026 01:02:28 +0000 (0:00:01.390) 0:00:08.714 **********
2026-03-27 01:05:28.986918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986932 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-27 01:05:28.986936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986944 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986948 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.986982 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.986988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.986994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.987006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.987013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.987019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-27 01:05:28.987028 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.987042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.987048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.987054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.987060 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.987070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.987077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.987086 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-27 01:05:28.987098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.987105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.987112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.987118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.987331 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.987350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.987357 | orchestrator | 2026-03-27 01:05:28.987363 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-27 01:05:28.987370 | orchestrator | Friday 27 March 2026 01:02:34 +0000 (0:00:05.729) 0:00:14.444 ********** 2026-03-27 01:05:28.987387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987409 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:28.987414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987423 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987443 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.987447 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-27 01:05:28.987452 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987456 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987463 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-27 01:05:28.987471 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987475 | orchestrator | skipping: [testbed-manager] 2026-03-27 01:05:28.987481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-03-27 01:05:28.987497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987516 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.987522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987526 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.987549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987561 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.987565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987584 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.987588 | orchestrator | 2026-03-27 01:05:28.987592 | orchestrator | TASK 
[service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-27 01:05:28.987596 | orchestrator | Friday 27 March 2026 01:02:36 +0000 (0:00:01.719) 0:00:16.163 ********** 2026-03-27 01:05:28.987602 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-27 01:05:28.987606 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987610 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987614 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-27 01:05:28.987619 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987708 | orchestrator | skipping: [testbed-manager] 2026-03-27 01:05:28.987717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987721 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.987725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987736 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987740 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:28.987744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-27 01:05:28.987759 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.987765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987797 | orchestrator | 
skipping: [testbed-node-4] 2026-03-27 01:05:28.987803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987815 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.987819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-27 01:05:28.987831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-27 01:05:28.987983 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.988003 | orchestrator | 2026-03-27 01:05:28.988010 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-27 01:05:28.988015 | orchestrator | Friday 27 March 2026 01:02:38 +0000 (0:00:02.421) 0:00:18.584 ********** 2026-03-27 01:05:28.988019 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-27 01:05:28.988026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.988030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.988034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.988057 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.988062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.988069 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.988073 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.988077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.988083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.988088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.988092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.988099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.988103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.988109 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.988113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.988117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.988123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.988127 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.988152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.988158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.988172 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-27 01:05:28.988181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.988187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.988196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.988202 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.988212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.988227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 
01:05:28.988233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.988240 | orchestrator | 2026-03-27 01:05:28.988246 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-27 01:05:28.988252 | orchestrator | Friday 27 March 2026 01:02:44 +0000 (0:00:05.394) 0:00:23.979 ********** 2026-03-27 01:05:28.988258 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-27 01:05:28.988264 | orchestrator | 2026-03-27 01:05:28.988271 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-27 01:05:28.988280 | orchestrator | Friday 27 March 2026 01:02:44 +0000 (0:00:00.853) 0:00:24.833 ********** 2026-03-27 01:05:28.988286 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1115085, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7996764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988291 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1115085, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7996764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988298 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1115085, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7996764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988306 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1115117, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8061154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988310 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1115085, 'dev': 
112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7996764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988314 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1115117, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8061154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988320 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1115085, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7996764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.988325 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1115079, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.798998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988329 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1115117, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8061154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988337 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1115085, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7996764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988343 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1115079, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.798998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-03-27 01:05:28.988347 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1115100, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.802356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988351 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1115079, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.798998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988357 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1115117, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8061154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988361 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1115117, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8061154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988365 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1115100, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.802356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988374 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1115085, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7996764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988378 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 
1115100, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.802356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988382 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1115079, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.798998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988386 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1115077, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7974179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988392 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1115077, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7974179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988396 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1115100, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.802356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988400 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1115087, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7998888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988408 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1115077, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7974179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-03-27 01:05:28.988412 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1115117, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8061154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988416 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1115095, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8013184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988420 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1115117, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8061154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.988424 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1115079, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.798998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988430 | orchestrator | 2026-03-27 01:05:28 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:05:28.988614 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1115077, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7974179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988627 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1115089, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8003025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988648 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1115079, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.798998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988656 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1115087, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7998888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988662 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1115100, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.802356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988668 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1115083, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 
'mtime': 1774569745.0, 'ctime': 1774570678.7991154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988674 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1115087, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7998888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988686 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1115087, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7998888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988696 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1115095, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8013184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988705 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1115100, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.802356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988712 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115115, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.805154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988719 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1115095, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8013184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988726 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1115077, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7974179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988732 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1115079, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.798998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.988743 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1115095, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8013184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988754 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1115077, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7974179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988776 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1115089, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8003025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988783 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1114956, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7628827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988787 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1115089, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 
1774569745.0, 'ctime': 1774570678.8003025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988791 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1115087, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7998888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988795 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1115089, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8003025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988803 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1115087, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7998888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988809 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1115083, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7991154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988815 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1115083, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7991154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988819 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1115095, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8013184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988823 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1115131, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.810248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988827 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1115083, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7991154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988831 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115115, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.805154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988839 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1115095, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8013184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988845 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115115, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.805154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988853 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1115089, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8003025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988857 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1114956, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 
'mtime': 1774569745.0, 'ctime': 1774570678.7628827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988861 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1115100, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.802356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.988865 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1115106, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8035202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988869 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115115, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.805154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988879 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1115083, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7991154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988883 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1115089, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8003025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988888 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1114956, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7628827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988892 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1114956, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7628827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988897 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1115131, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.810248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988900 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115078, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7977896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988905 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1115131, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.810248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988914 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1115083, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7991154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988918 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115115, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.805154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988923 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1115131, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 
'mtime': 1774569745.0, 'ctime': 1774570678.810248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988927 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1115106, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8035202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988932 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1114959, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7631147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988936 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1115077, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7974179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.988940 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1115094, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988948 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1115106, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8035202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988952 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115115, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.805154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988958 | orchestrator | 
skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115078, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7977896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988962 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1114956, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7628827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988966 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1115106, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8035202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988970 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1115091, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.988976 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1114956, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7628827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989381 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115078, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7977896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989402 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115078, 'dev': 112, 'nlink': 1, 'atime': 
1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7977896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989409 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1115131, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.810248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989413 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1114959, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7631147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989417 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1115129, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8092635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989421 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.989426 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1115087, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7998888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.989435 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1115131, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.810248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989442 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1115094, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-03-27 01:05:28.989454 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1114959, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7631147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989461 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1115106, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8035202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989469 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1114959, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7631147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989476 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1115091, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989502 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1115106, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8035202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989514 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1115094, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989525 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
14018, 'inode': 1115095, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8013184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.989651 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115078, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7977896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989663 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115078, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7977896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989667 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1115094, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989671 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1115129, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8092635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989675 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:28.989684 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1115091, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989688 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1114959, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7631147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989697 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1115091, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989701 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1114959, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7631147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989706 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1115089, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8003025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.989710 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1115129, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8092635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989714 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.989718 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1115094, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989726 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1115129, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8092635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989730 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.989734 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1115094, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989741 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1115091, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989745 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1115091, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989750 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1115129, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8092635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989754 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.989758 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1115083, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7991154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.989762 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1115129, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8092635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-27 01:05:28.989768 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.989772 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 3, 'inode': 1115115, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.805154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.989776 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1114956, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7628827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.989783 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1115131, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.810248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.989787 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1115106, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8035202, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.989793 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1115078, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7977896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.989797 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1114959, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7631147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-27 01:05:28.989803 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1115094, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})
2026-03-27 01:05:28.989807 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1115091, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.800969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-27 01:05:28.989811 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1115129, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.8092635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-27 01:05:28.989814 | orchestrator |
2026-03-27 01:05:28.989818 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-27 01:05:28.989823 | orchestrator | Friday 27 March 2026 01:03:13 +0000 (0:00:28.626) 0:00:53.459 **********
2026-03-27 01:05:28.989827 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:05:28.989830 | orchestrator |
2026-03-27 01:05:28.989836 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-27 01:05:28.989840 | orchestrator | Friday 27 March 2026 01:03:14 +0000 (0:00:00.781) 0:00:54.240 **********
2026-03-27 01:05:28.989844 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-03-27 01:05:28.989864 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-27 01:05:28.989867 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-03-27 01:05:28.989899 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:05:28.989903 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-03-27 01:05:28.989931 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-27 01:05:28.989935 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-03-27 01:05:28.989954 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-27 01:05:28.989958 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-03-27 01:05:28.989977 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-27 01:05:28.989980 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-03-27 01:05:28.989999 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-27 01:05:28.990003 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-03-27 01:05:28.990056 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-27 01:05:28.990061 | orchestrator | 2026-03-27
01:05:28.990065 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-27 01:05:28.990070 | orchestrator | Friday 27 March 2026 01:03:16 +0000 (0:00:02.053) 0:00:56.293 ********** 2026-03-27 01:05:28.990074 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-27 01:05:28.990079 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:28.990083 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-27 01:05:28.990088 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.990092 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-27 01:05:28.990096 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.990101 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-27 01:05:28.990132 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.990137 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-27 01:05:28.990141 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.990146 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-27 01:05:28.990150 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.990154 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-27 01:05:28.990161 | orchestrator | 2026-03-27 01:05:28.990167 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-27 01:05:28.990173 | orchestrator | Friday 27 March 2026 01:03:30 +0000 (0:00:14.003) 0:01:10.297 ********** 2026-03-27 01:05:28.990185 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-27 01:05:28.990195 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:28.990201 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-27 01:05:28.990208 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.990214 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-27 01:05:28.990221 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-27 01:05:28.990227 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.990235 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.990239 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-27 01:05:28.990244 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.990248 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-27 01:05:28.990252 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.990257 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-27 01:05:28.990261 | orchestrator | 2026-03-27 01:05:28.990265 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-27 01:05:28.990269 | orchestrator | Friday 27 March 2026 01:03:33 +0000 (0:00:03.249) 0:01:13.546 ********** 2026-03-27 01:05:28.990274 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-27 01:05:28.990282 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-27 01:05:28.990286 | 
orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:28.990290 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.990294 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-27 01:05:28.990297 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.990301 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-27 01:05:28.990305 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.990309 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-27 01:05:28.990312 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.990316 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-27 01:05:28.990320 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.990324 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-27 01:05:28.990328 | orchestrator | 2026-03-27 01:05:28.990332 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-27 01:05:28.990335 | orchestrator | Friday 27 March 2026 01:03:35 +0000 (0:00:01.529) 0:01:15.075 ********** 2026-03-27 01:05:28.990339 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-27 01:05:28.990343 | orchestrator | 2026-03-27 01:05:28.990346 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-27 01:05:28.990350 | orchestrator | Friday 27 March 2026 01:03:35 +0000 (0:00:00.704) 0:01:15.780 ********** 2026-03-27 01:05:28.990354 | orchestrator | skipping: [testbed-manager] 
2026-03-27 01:05:28.990357 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:28.990361 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.990365 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.990369 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.990375 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.990379 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.990383 | orchestrator | 2026-03-27 01:05:28.990390 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-27 01:05:28.990397 | orchestrator | Friday 27 March 2026 01:03:36 +0000 (0:00:00.877) 0:01:16.657 ********** 2026-03-27 01:05:28.990403 | orchestrator | skipping: [testbed-manager] 2026-03-27 01:05:28.990409 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.990414 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.990421 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.990425 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:28.990428 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:05:28.990432 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:05:28.990436 | orchestrator | 2026-03-27 01:05:28.990440 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-27 01:05:28.990443 | orchestrator | Friday 27 March 2026 01:03:39 +0000 (0:00:02.408) 0:01:19.065 ********** 2026-03-27 01:05:28.990447 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-27 01:05:28.990451 | orchestrator | skipping: [testbed-manager] 2026-03-27 01:05:28.990457 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-27 01:05:28.990463 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:28.990469 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-27 01:05:28.990475 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.990481 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-27 01:05:28.990488 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-27 01:05:28.990498 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.990504 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.990510 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-27 01:05:28.990517 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.990523 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-27 01:05:28.990573 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.990583 | orchestrator | 2026-03-27 01:05:28.990587 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-27 01:05:28.990590 | orchestrator | Friday 27 March 2026 01:03:41 +0000 (0:00:01.970) 0:01:21.036 ********** 2026-03-27 01:05:28.990594 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-27 01:05:28.990599 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-27 01:05:28.990602 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-27 01:05:28.990606 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:28.990610 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.990614 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.990618 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-27 01:05:28.990622 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.990629 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-27 01:05:28.990633 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-27 01:05:28.990637 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.990640 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-27 01:05:28.990649 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.990653 | orchestrator | 2026-03-27 01:05:28.990657 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-27 01:05:28.990660 | orchestrator | Friday 27 March 2026 01:03:42 +0000 (0:00:01.845) 0:01:22.881 ********** 2026-03-27 01:05:28.990664 | orchestrator | [WARNING]: Skipped 2026-03-27 01:05:28.990668 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-27 01:05:28.990672 | orchestrator | due to this access issue: 2026-03-27 01:05:28.990676 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-27 01:05:28.990679 | orchestrator | not a directory 2026-03-27 01:05:28.990683 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-27 01:05:28.990687 | orchestrator | 2026-03-27 01:05:28.990691 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-27 01:05:28.990695 | orchestrator | Friday 27 March 2026 01:03:44 +0000 (0:00:01.084) 0:01:23.965 ********** 2026-03-27 01:05:28.990698 | orchestrator | skipping: [testbed-manager] 2026-03-27 01:05:28.990702 | orchestrator | skipping: [testbed-node-0] 2026-03-27 
01:05:28.990706 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.990710 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.990714 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.990718 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.990721 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.990725 | orchestrator | 2026-03-27 01:05:28.990729 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-27 01:05:28.990733 | orchestrator | Friday 27 March 2026 01:03:44 +0000 (0:00:00.642) 0:01:24.608 ********** 2026-03-27 01:05:28.990737 | orchestrator | skipping: [testbed-manager] 2026-03-27 01:05:28.990741 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:28.990744 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:28.990748 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:28.990752 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:05:28.990756 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:05:28.990760 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:05:28.990763 | orchestrator | 2026-03-27 01:05:28.990767 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-27 01:05:28.990771 | orchestrator | Friday 27 March 2026 01:03:45 +0000 (0:00:00.747) 0:01:25.355 ********** 2026-03-27 01:05:28.990776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.990781 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.990790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.990800 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-27 01:05:28.990804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.990808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.990812 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.990816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-27 01:05:28.990821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.990828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.990835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-27 01:05:28.990843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.990847 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.990851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.990855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.990859 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.990863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.990870 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.990879 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-27 01:05:28.990884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.990888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.990892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.990896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.990900 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.990907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.990913 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-27 01:05:28.990919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.990923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.990927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-27 01:05:28.990931 | orchestrator | 2026-03-27 01:05:28.990935 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-27 01:05:28.990938 | orchestrator | Friday 27 March 2026 01:03:49 +0000 (0:00:04.586) 0:01:29.941 ********** 2026-03-27 01:05:28.990942 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-27 01:05:28.990946 | orchestrator | skipping: [testbed-manager] 2026-03-27 01:05:28.990950 | orchestrator | 2026-03-27 01:05:28.990954 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-27 01:05:28.990957 | orchestrator | Friday 27 March 2026 01:03:51 +0000 (0:00:01.184) 0:01:31.126 ********** 2026-03-27 01:05:28.990961 | orchestrator | 2026-03-27 01:05:28.990965 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-27 01:05:28.990969 | orchestrator | Friday 27 March 2026 01:03:51 +0000 (0:00:00.112) 0:01:31.238 ********** 2026-03-27 01:05:28.990972 | orchestrator | 2026-03-27 01:05:28.990976 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-27 01:05:28.990980 | orchestrator | Friday 27 March 2026 01:03:51 +0000 (0:00:00.107) 0:01:31.345 ********** 2026-03-27 01:05:28.990983 | orchestrator | 2026-03-27 01:05:28.990987 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-27 01:05:28.990991 | orchestrator | Friday 27 March 2026 01:03:51 +0000 (0:00:00.105) 0:01:31.451 ********** 2026-03-27 01:05:28.990994 | orchestrator | 2026-03-27 01:05:28.991001 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-27 01:05:28.991005 
| orchestrator | Friday 27 March 2026 01:03:51 +0000 (0:00:00.112) 0:01:31.563 ********** 2026-03-27 01:05:28.991009 | orchestrator | 2026-03-27 01:05:28.991013 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-27 01:05:28.991016 | orchestrator | Friday 27 March 2026 01:03:51 +0000 (0:00:00.053) 0:01:31.617 ********** 2026-03-27 01:05:28.991020 | orchestrator | 2026-03-27 01:05:28.991024 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-27 01:05:28.991028 | orchestrator | Friday 27 March 2026 01:03:51 +0000 (0:00:00.051) 0:01:31.668 ********** 2026-03-27 01:05:28.991031 | orchestrator | 2026-03-27 01:05:28.991035 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-27 01:05:28.991039 | orchestrator | Friday 27 March 2026 01:03:51 +0000 (0:00:00.068) 0:01:31.736 ********** 2026-03-27 01:05:28.991043 | orchestrator | changed: [testbed-manager] 2026-03-27 01:05:28.991046 | orchestrator | 2026-03-27 01:05:28.991050 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-27 01:05:28.991056 | orchestrator | Friday 27 March 2026 01:04:06 +0000 (0:00:14.419) 0:01:46.155 ********** 2026-03-27 01:05:28.991060 | orchestrator | changed: [testbed-manager] 2026-03-27 01:05:28.991064 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:05:28.991068 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:28.991072 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:05:28.991075 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:05:28.991079 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:05:28.991083 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:05:28.991087 | orchestrator | 2026-03-27 01:05:28.991090 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-27 01:05:28.991094 | 
orchestrator | Friday 27 March 2026 01:04:21 +0000 (0:00:15.609) 0:02:01.765 ********** 2026-03-27 01:05:28.991098 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:05:28.991102 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:05:28.991105 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:28.991109 | orchestrator | 2026-03-27 01:05:28.991113 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-27 01:05:28.991117 | orchestrator | Friday 27 March 2026 01:04:32 +0000 (0:00:10.811) 0:02:12.577 ********** 2026-03-27 01:05:28.991121 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:05:28.991124 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:05:28.991128 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:28.991132 | orchestrator | 2026-03-27 01:05:28.991136 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-27 01:05:28.991139 | orchestrator | Friday 27 March 2026 01:04:44 +0000 (0:00:11.397) 0:02:23.975 ********** 2026-03-27 01:05:28.991143 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:05:28.991147 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:05:28.991150 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:05:28.991154 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:05:28.991158 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:28.991164 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:05:28.991168 | orchestrator | changed: [testbed-manager] 2026-03-27 01:05:28.991171 | orchestrator | 2026-03-27 01:05:28.991175 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-27 01:05:28.991179 | orchestrator | Friday 27 March 2026 01:04:57 +0000 (0:00:13.786) 0:02:37.761 ********** 2026-03-27 01:05:28.991183 | orchestrator | changed: [testbed-manager] 2026-03-27 01:05:28.991187 | orchestrator | 2026-03-27 
01:05:28.991190 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-27 01:05:28.991194 | orchestrator | Friday 27 March 2026 01:05:08 +0000 (0:00:10.215) 0:02:47.977 ********** 2026-03-27 01:05:28.991198 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:05:28.991202 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:05:28.991206 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:28.991212 | orchestrator | 2026-03-27 01:05:28.991216 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-27 01:05:28.991220 | orchestrator | Friday 27 March 2026 01:05:12 +0000 (0:00:04.448) 0:02:52.426 ********** 2026-03-27 01:05:28.991224 | orchestrator | changed: [testbed-manager] 2026-03-27 01:05:28.991228 | orchestrator | 2026-03-27 01:05:28.991232 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-27 01:05:28.991236 | orchestrator | Friday 27 March 2026 01:05:17 +0000 (0:00:04.913) 0:02:57.340 ********** 2026-03-27 01:05:28.991240 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:05:28.991243 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:05:28.991247 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:05:28.991251 | orchestrator | 2026-03-27 01:05:28.991255 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:05:28.991259 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-27 01:05:28.991263 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-27 01:05:28.991267 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-27 01:05:28.991271 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 
ignored=0 2026-03-27 01:05:28.991275 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-27 01:05:28.991279 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-27 01:05:28.991282 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-27 01:05:28.991286 | orchestrator | 2026-03-27 01:05:28.991293 | orchestrator | 2026-03-27 01:05:28.991307 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:05:28.991316 | orchestrator | Friday 27 March 2026 01:05:27 +0000 (0:00:10.063) 0:03:07.404 ********** 2026-03-27 01:05:28.991321 | orchestrator | =============================================================================== 2026-03-27 01:05:28.991327 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.63s 2026-03-27 01:05:28.991333 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.61s 2026-03-27 01:05:28.991339 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.42s 2026-03-27 01:05:28.991345 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.00s 2026-03-27 01:05:28.991351 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.79s 2026-03-27 01:05:28.991361 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.40s 2026-03-27 01:05:28.991367 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.81s 2026-03-27 01:05:28.991373 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 10.22s 2026-03-27 01:05:28.991379 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.06s 2026-03-27 
01:05:28.991385 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.73s 2026-03-27 01:05:28.991391 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.40s 2026-03-27 01:05:28.991397 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.91s 2026-03-27 01:05:28.991403 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.59s 2026-03-27 01:05:28.991415 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.45s 2026-03-27 01:05:28.991421 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.33s 2026-03-27 01:05:28.991428 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.25s 2026-03-27 01:05:28.991434 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.42s 2026-03-27 01:05:28.991440 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.41s 2026-03-27 01:05:28.991446 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.05s 2026-03-27 01:05:28.991452 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.97s 2026-03-27 01:05:32.021468 | orchestrator | 2026-03-27 01:05:32 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:32.021521 | orchestrator | 2026-03-27 01:05:32 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:05:32.022324 | orchestrator | 2026-03-27 01:05:32 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:32.023030 | orchestrator | 2026-03-27 01:05:32 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:32.023075 | orchestrator | 2026-03-27 01:05:32 | INFO  | Wait 1 second(s) until the 
next check 2026-03-27 01:05:35.055578 | orchestrator | 2026-03-27 01:05:35 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:35.055904 | orchestrator | 2026-03-27 01:05:35 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:05:35.057198 | orchestrator | 2026-03-27 01:05:35 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:35.057853 | orchestrator | 2026-03-27 01:05:35 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:35.058050 | orchestrator | 2026-03-27 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:05:38.109234 | orchestrator | 2026-03-27 01:05:38 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:38.110144 | orchestrator | 2026-03-27 01:05:38 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:05:38.110682 | orchestrator | 2026-03-27 01:05:38 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:38.111483 | orchestrator | 2026-03-27 01:05:38 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:38.111587 | orchestrator | 2026-03-27 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:05:41.149429 | orchestrator | 2026-03-27 01:05:41 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:41.151030 | orchestrator | 2026-03-27 01:05:41 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:05:41.151782 | orchestrator | 2026-03-27 01:05:41 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:41.153317 | orchestrator | 2026-03-27 01:05:41 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:41.153353 | orchestrator | 2026-03-27 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-03-27 
01:05:44.181100 | orchestrator | 2026-03-27 01:05:44 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:44.184426 | orchestrator | 2026-03-27 01:05:44 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:05:44.186721 | orchestrator | 2026-03-27 01:05:44 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:44.186798 | orchestrator | 2026-03-27 01:05:44 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:44.186809 | orchestrator | 2026-03-27 01:05:44 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:05:47.219306 | orchestrator | 2026-03-27 01:05:47 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:47.219935 | orchestrator | 2026-03-27 01:05:47 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:05:47.220689 | orchestrator | 2026-03-27 01:05:47 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:47.222504 | orchestrator | 2026-03-27 01:05:47 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:47.222636 | orchestrator | 2026-03-27 01:05:47 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:05:50.268384 | orchestrator | 2026-03-27 01:05:50 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:50.272542 | orchestrator | 2026-03-27 01:05:50 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:05:50.275572 | orchestrator | 2026-03-27 01:05:50 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:50.277663 | orchestrator | 2026-03-27 01:05:50 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:50.278789 | orchestrator | 2026-03-27 01:05:50 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:05:53.327803 | orchestrator 
| 2026-03-27 01:05:53 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:53.330051 | orchestrator | 2026-03-27 01:05:53 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:05:53.331771 | orchestrator | 2026-03-27 01:05:53 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:53.333995 | orchestrator | 2026-03-27 01:05:53 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:53.334072 | orchestrator | 2026-03-27 01:05:53 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:05:56.379677 | orchestrator | 2026-03-27 01:05:56 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state STARTED 2026-03-27 01:05:56.382412 | orchestrator | 2026-03-27 01:05:56 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:05:56.388822 | orchestrator | 2026-03-27 01:05:56 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED 2026-03-27 01:05:56.388867 | orchestrator | 2026-03-27 01:05:56 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:05:56.388871 | orchestrator | 2026-03-27 01:05:56 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:05:59.427665 | orchestrator | 2026-03-27 01:05:59 | INFO  | Task f3b5e2d3-be9c-412a-b59c-1127764e86e8 is in state SUCCESS 2026-03-27 01:05:59.429423 | orchestrator | 2026-03-27 01:05:59.429479 | orchestrator | 2026-03-27 01:05:59.429488 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 01:05:59.429509 | orchestrator | 2026-03-27 01:05:59.429516 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 01:05:59.429522 | orchestrator | Friday 27 March 2026 01:03:04 +0000 (0:00:00.401) 0:00:00.401 ********** 2026-03-27 01:05:59.429528 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:05:59.429535 | 
orchestrator | ok: [testbed-node-1] 2026-03-27 01:05:59.429542 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:05:59.429548 | orchestrator | 2026-03-27 01:05:59.429554 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 01:05:59.429561 | orchestrator | Friday 27 March 2026 01:03:04 +0000 (0:00:00.320) 0:00:00.722 ********** 2026-03-27 01:05:59.429582 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-27 01:05:59.429589 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-27 01:05:59.429595 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-27 01:05:59.429602 | orchestrator | 2026-03-27 01:05:59.429608 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-27 01:05:59.429615 | orchestrator | 2026-03-27 01:05:59.429628 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-27 01:05:59.429635 | orchestrator | Friday 27 March 2026 01:03:04 +0000 (0:00:00.262) 0:00:00.984 ********** 2026-03-27 01:05:59.429641 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:05:59.429648 | orchestrator | 2026-03-27 01:05:59.429654 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-27 01:05:59.429660 | orchestrator | Friday 27 March 2026 01:03:05 +0000 (0:00:00.590) 0:00:01.575 ********** 2026-03-27 01:05:59.429667 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-27 01:05:59.429674 | orchestrator | 2026-03-27 01:05:59.429680 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-27 01:05:59.429687 | orchestrator | Friday 27 March 2026 01:03:09 +0000 (0:00:04.148) 0:00:05.724 ********** 2026-03-27 01:05:59.429693 | orchestrator | changed: 
[testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-27 01:05:59.429700 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-27 01:05:59.429706 | orchestrator | 2026-03-27 01:05:59.429712 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-27 01:05:59.429719 | orchestrator | Friday 27 March 2026 01:03:16 +0000 (0:00:07.038) 0:00:12.762 ********** 2026-03-27 01:05:59.429729 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-27 01:05:59.429736 | orchestrator | 2026-03-27 01:05:59.429743 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-27 01:05:59.429749 | orchestrator | Friday 27 March 2026 01:03:20 +0000 (0:00:03.437) 0:00:16.200 ********** 2026-03-27 01:05:59.429755 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-27 01:05:59.429762 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-27 01:05:59.429769 | orchestrator | 2026-03-27 01:05:59.429775 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-27 01:05:59.429782 | orchestrator | Friday 27 March 2026 01:03:23 +0000 (0:00:03.768) 0:00:19.969 ********** 2026-03-27 01:05:59.429788 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-27 01:05:59.429795 | orchestrator | 2026-03-27 01:05:59.429802 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-27 01:05:59.429808 | orchestrator | Friday 27 March 2026 01:03:27 +0000 (0:00:03.263) 0:00:23.233 ********** 2026-03-27 01:05:59.429815 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-27 01:05:59.429821 | orchestrator | 2026-03-27 01:05:59.429827 | orchestrator | TASK [glance : Ensuring config directories exist] 
****************************** 2026-03-27 01:05:59.429834 | orchestrator | Friday 27 March 2026 01:03:30 +0000 (0:00:03.432) 0:00:26.666 ********** 2026-03-27 01:05:59.429864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.429879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.429889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.429900 | orchestrator | 2026-03-27 01:05:59.429907 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-27 01:05:59.429914 | orchestrator | Friday 27 March 2026 01:03:35 +0000 (0:00:04.436) 0:00:31.102 ********** 2026-03-27 01:05:59.429921 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:05:59.429928 | orchestrator | 2026-03-27 01:05:59.429934 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-27 01:05:59.429947 | orchestrator | Friday 27 March 2026 01:03:35 +0000 (0:00:00.606) 
0:00:31.709 ********** 2026-03-27 01:05:59.429954 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:05:59.429961 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:59.429967 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:05:59.429974 | orchestrator | 2026-03-27 01:05:59.429981 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-27 01:05:59.429988 | orchestrator | Friday 27 March 2026 01:03:39 +0000 (0:00:04.182) 0:00:35.891 ********** 2026-03-27 01:05:59.429994 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-27 01:05:59.430001 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-27 01:05:59.430008 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-27 01:05:59.430046 | orchestrator | 2026-03-27 01:05:59.430054 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-27 01:05:59.430060 | orchestrator | Friday 27 March 2026 01:03:42 +0000 (0:00:02.631) 0:00:38.523 ********** 2026-03-27 01:05:59.430067 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-27 01:05:59.430074 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-27 01:05:59.430081 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-27 01:05:59.430088 | orchestrator | 2026-03-27 01:05:59.430095 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-27 01:05:59.430102 | orchestrator | Friday 27 March 2026 01:03:43 +0000 (0:00:01.353) 0:00:39.876 ********** 2026-03-27 01:05:59.430108 
| orchestrator | ok: [testbed-node-0] 2026-03-27 01:05:59.430115 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:05:59.430121 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:05:59.430128 | orchestrator | 2026-03-27 01:05:59.430135 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-27 01:05:59.430142 | orchestrator | Friday 27 March 2026 01:03:44 +0000 (0:00:00.658) 0:00:40.535 ********** 2026-03-27 01:05:59.430148 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430155 | orchestrator | 2026-03-27 01:05:59.430161 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-27 01:05:59.430168 | orchestrator | Friday 27 March 2026 01:03:44 +0000 (0:00:00.102) 0:00:40.637 ********** 2026-03-27 01:05:59.430174 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430180 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430187 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430193 | orchestrator | 2026-03-27 01:05:59.430200 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-27 01:05:59.430207 | orchestrator | Friday 27 March 2026 01:03:44 +0000 (0:00:00.251) 0:00:40.889 ********** 2026-03-27 01:05:59.430214 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:05:59.430225 | orchestrator | 2026-03-27 01:05:59.430232 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-27 01:05:59.430239 | orchestrator | Friday 27 March 2026 01:03:45 +0000 (0:00:00.609) 0:00:41.498 ********** 2026-03-27 01:05:59.430250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.430264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.430275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.430287 | orchestrator | 2026-03-27 01:05:59.430294 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-27 01:05:59.430300 | orchestrator | Friday 27 March 2026 01:03:50 +0000 (0:00:04.627) 0:00:46.126 ********** 2026-03-27 01:05:59.430315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-27 01:05:59.430322 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-27 01:05:59.430341 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-27 01:05:59.430364 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430371 | orchestrator | 2026-03-27 01:05:59.430377 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-27 01:05:59.430384 | orchestrator | Friday 27 March 2026 01:03:53 +0000 (0:00:03.137) 0:00:49.263 ********** 2026-03-27 01:05:59.430391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-27 01:05:59.430403 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-27 01:05:59.430421 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-27 01:05:59.430446 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430453 | orchestrator | 2026-03-27 01:05:59.430460 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-27 01:05:59.430466 | orchestrator | Friday 27 March 2026 01:03:57 +0000 (0:00:04.611) 0:00:53.874 ********** 2026-03-27 01:05:59.430472 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430479 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430485 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430491 | orchestrator | 2026-03-27 01:05:59.430560 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-27 01:05:59.430568 | orchestrator | Friday 27 March 2026 01:04:02 +0000 (0:00:04.560) 0:00:58.435 ********** 2026-03-27 01:05:59.430582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.430596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.430614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.430622 | orchestrator | 2026-03-27 01:05:59.430629 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-27 01:05:59.430636 | orchestrator | Friday 27 March 2026 01:04:07 +0000 (0:00:05.007) 0:01:03.442 ********** 2026-03-27 01:05:59.430643 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:05:59.430650 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:59.430657 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:05:59.430663 | orchestrator | 2026-03-27 01:05:59.430669 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-27 01:05:59.430676 | orchestrator | Friday 27 March 2026 01:04:17 +0000 (0:00:10.011) 0:01:13.453 ********** 2026-03-27 01:05:59.430682 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430689 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430695 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430702 | orchestrator | 2026-03-27 01:05:59.430708 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-27 01:05:59.430714 | orchestrator | Friday 27 March 2026 01:04:20 +0000 (0:00:02.858) 0:01:16.311 ********** 2026-03-27 01:05:59.430721 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430728 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430734 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430741 | orchestrator | 2026-03-27 01:05:59.430747 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-27 01:05:59.430754 | orchestrator | Friday 27 March 2026 01:04:24 +0000 (0:00:04.288) 0:01:20.600 ********** 2026-03-27 01:05:59.430760 | orchestrator | skipping: 
[testbed-node-0] 2026-03-27 01:05:59.430767 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430777 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430784 | orchestrator | 2026-03-27 01:05:59.430790 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-27 01:05:59.430797 | orchestrator | Friday 27 March 2026 01:04:27 +0000 (0:00:03.111) 0:01:23.712 ********** 2026-03-27 01:05:59.430803 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430811 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430821 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430828 | orchestrator | 2026-03-27 01:05:59.430834 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-27 01:05:59.430841 | orchestrator | Friday 27 March 2026 01:04:31 +0000 (0:00:03.465) 0:01:27.178 ********** 2026-03-27 01:05:59.430848 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430854 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430859 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430864 | orchestrator | 2026-03-27 01:05:59.430869 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-27 01:05:59.430876 | orchestrator | Friday 27 March 2026 01:04:31 +0000 (0:00:00.335) 0:01:27.513 ********** 2026-03-27 01:05:59.430883 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-27 01:05:59.430890 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430896 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-27 01:05:59.430902 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430908 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  
2026-03-27 01:05:59.430914 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430920 | orchestrator | 2026-03-27 01:05:59.430927 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-27 01:05:59.430933 | orchestrator | Friday 27 March 2026 01:04:36 +0000 (0:00:04.898) 0:01:32.412 ********** 2026-03-27 01:05:59.430939 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430946 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430952 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430958 | orchestrator | 2026-03-27 01:05:59.430964 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-03-27 01:05:59.430971 | orchestrator | Friday 27 March 2026 01:04:40 +0000 (0:00:03.827) 0:01:36.239 ********** 2026-03-27 01:05:59.430977 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.430984 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.430991 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.430997 | orchestrator | 2026-03-27 01:05:59.431003 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-27 01:05:59.431009 | orchestrator | Friday 27 March 2026 01:04:43 +0000 (0:00:02.949) 0:01:39.188 ********** 2026-03-27 01:05:59.431020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.431035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.431043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-27 01:05:59.431050 | orchestrator | 2026-03-27 01:05:59.431056 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-27 01:05:59.431065 | orchestrator | Friday 27 March 2026 01:04:50 +0000 (0:00:07.495) 0:01:46.683 ********** 2026-03-27 01:05:59.431071 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:05:59.431077 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:05:59.431084 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:05:59.431090 | orchestrator | 2026-03-27 01:05:59.431096 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-27 01:05:59.431107 | orchestrator | Friday 27 March 2026 01:04:51 +0000 (0:00:00.324) 0:01:47.008 ********** 2026-03-27 01:05:59.431113 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:59.431119 | orchestrator | 2026-03-27 01:05:59.431126 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-27 01:05:59.431131 | orchestrator | Friday 27 March 2026 01:04:53 +0000 (0:00:02.078) 0:01:49.086 ********** 2026-03-27 01:05:59.431138 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:59.431144 | orchestrator | 2026-03-27 01:05:59.431151 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-27 01:05:59.431157 | orchestrator | Friday 27 March 2026 01:04:55 +0000 (0:00:02.276) 0:01:51.362 ********** 
2026-03-27 01:05:59.431164 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:59.431170 | orchestrator | 2026-03-27 01:05:59.431176 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-27 01:05:59.431182 | orchestrator | Friday 27 March 2026 01:04:57 +0000 (0:00:01.944) 0:01:53.307 ********** 2026-03-27 01:05:59.431188 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:59.431194 | orchestrator | 2026-03-27 01:05:59.431200 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-27 01:05:59.431206 | orchestrator | Friday 27 March 2026 01:05:24 +0000 (0:00:27.203) 0:02:20.510 ********** 2026-03-27 01:05:59.431211 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:05:59.431218 | orchestrator | 2026-03-27 01:05:59.431227 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-27 01:05:59.431233 | orchestrator | Friday 27 March 2026 01:05:26 +0000 (0:00:01.918) 0:02:22.429 ********** 2026-03-27 01:05:59.431239 | orchestrator | 2026-03-27 01:05:59.431245 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-27 01:05:59.431251 | orchestrator | Friday 27 March 2026 01:05:26 +0000 (0:00:00.066) 0:02:22.496 ********** 2026-03-27 01:05:59.431257 | orchestrator | 2026-03-27 01:05:59.431262 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-27 01:05:59.431268 | orchestrator | Friday 27 March 2026 01:05:26 +0000 (0:00:00.059) 0:02:22.556 ********** 2026-03-27 01:05:59.431274 | orchestrator | 2026-03-27 01:05:59.431279 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-27 01:05:59.431285 | orchestrator | Friday 27 March 2026 01:05:26 +0000 (0:00:00.065) 0:02:22.621 ********** 2026-03-27 01:05:59.431291 | orchestrator | changed: [testbed-node-0] 
2026-03-27 01:05:59.431297 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:05:59.431303 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:05:59.431309 | orchestrator |
2026-03-27 01:05:59.431315 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:05:59.431321 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2026-03-27 01:05:59.431328 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-27 01:05:59.431334 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-27 01:05:59.431340 | orchestrator |
2026-03-27 01:05:59.431345 | orchestrator |
2026-03-27 01:05:59.431351 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:05:59.431357 | orchestrator | Friday 27 March 2026 01:05:59 +0000 (0:00:32.489) 0:02:55.111 **********
2026-03-27 01:05:59.431363 | orchestrator | ===============================================================================
2026-03-27 01:05:59.431369 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.49s
2026-03-27 01:05:59.431375 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.20s
2026-03-27 01:05:59.431381 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 10.01s
2026-03-27 01:05:59.431391 | orchestrator | glance : Check glance containers ---------------------------------------- 7.50s
2026-03-27 01:05:59.431397 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.04s
2026-03-27 01:05:59.431404 | orchestrator | glance : Copying over config.json files for services -------------------- 5.01s
2026-03-27 01:05:59.431410 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.90s
2026-03-27 01:05:59.431416 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.63s
2026-03-27 01:05:59.431422 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.61s
2026-03-27 01:05:59.431428 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.56s
2026-03-27 01:05:59.431434 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.44s
2026-03-27 01:05:59.431439 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.29s
2026-03-27 01:05:59.431445 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.18s
2026-03-27 01:05:59.431451 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.15s
2026-03-27 01:05:59.431457 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.83s
2026-03-27 01:05:59.431463 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.77s
2026-03-27 01:05:59.431472 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.47s
2026-03-27 01:05:59.431478 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.44s
2026-03-27 01:05:59.431484 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.43s
2026-03-27 01:05:59.431490 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.26s
2026-03-27 01:05:59.431508 | orchestrator | 2026-03-27 01:05:59 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED
2026-03-27 01:05:59.432774 | orchestrator | 2026-03-27 01:05:59 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED
2026-03-27 01:05:59.434374 | orchestrator | 2026-03-27 01:05:59 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED
2026-03-27 01:05:59.434428 | orchestrator | 2026-03-27 01:05:59 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:06:02.468735 | orchestrator | 2026-03-27 01:06:02 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED
2026-03-27 01:06:02.469450 | orchestrator | 2026-03-27 01:06:02 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED
2026-03-27 01:06:02.470644 | orchestrator | 2026-03-27 01:06:02 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED
2026-03-27 01:06:02.471612 | orchestrator | 2026-03-27 01:06:02 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED
2026-03-27 01:06:02.471644 | orchestrator | 2026-03-27 01:06:02 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:06:05.508903 | orchestrator | 2026-03-27 01:06:05 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED
2026-03-27 01:06:05.511192 | orchestrator | 2026-03-27 01:06:05 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED
2026-03-27 01:06:05.514137 | orchestrator | 2026-03-27 01:06:05 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED
2026-03-27 01:06:05.516806 | orchestrator | 2026-03-27 01:06:05 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED
2026-03-27 01:06:05.517420 | orchestrator | 2026-03-27 01:06:05 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:06:08.565708 | orchestrator | 2026-03-27 01:06:08 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED
2026-03-27 01:06:08.568714 | orchestrator | 2026-03-27 01:06:08 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED
2026-03-27 01:06:08.570988 | orchestrator | 2026-03-27 01:06:08 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state STARTED
2026-03-27 01:06:08.572889 | orchestrator | 2026-03-27 01:06:08 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED
2026-03-27 01:06:08.572936 | orchestrator | 2026-03-27 01:06:08 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:06:11.625191 | orchestrator | 2026-03-27 01:06:11 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED
2026-03-27 01:06:11.626860 | orchestrator | 2026-03-27 01:06:11 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED
2026-03-27 01:06:11.630144 | orchestrator | 2026-03-27 01:06:11 | INFO  | Task c00ce638-79e0-4637-8057-e5dc57d0cc73 is in state SUCCESS
2026-03-27 01:06:11.632218 | orchestrator |
2026-03-27 01:06:11.632279 | orchestrator |
2026-03-27 01:06:11.632289 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 01:06:11.632296 | orchestrator |
2026-03-27 01:06:11.632302 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 01:06:11.632308 | orchestrator | Friday 27 March 2026 01:03:28 +0000 (0:00:00.276) 0:00:00.276 **********
2026-03-27 01:06:11.632313 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:06:11.632321 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:06:11.632326 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:06:11.632332 | orchestrator |
2026-03-27 01:06:11.632338 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 01:06:11.632344 | orchestrator | Friday 27 March 2026 01:03:28 +0000 (0:00:00.260) 0:00:00.537 **********
2026-03-27 01:06:11.632351 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-03-27 01:06:11.632358 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-03-27 01:06:11.632364 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-03-27 01:06:11.632402 | orchestrator |
2026-03-27 01:06:11.632409 | orchestrator | PLAY [Apply role cinder]
******************************************************* 2026-03-27 01:06:11.632416 | orchestrator | 2026-03-27 01:06:11.632424 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-27 01:06:11.632432 | orchestrator | Friday 27 March 2026 01:03:29 +0000 (0:00:00.244) 0:00:00.781 ********** 2026-03-27 01:06:11.632460 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:06:11.632467 | orchestrator | 2026-03-27 01:06:11.632475 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-27 01:06:11.632510 | orchestrator | Friday 27 March 2026 01:03:29 +0000 (0:00:00.577) 0:00:01.359 ********** 2026-03-27 01:06:11.632517 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-27 01:06:11.632523 | orchestrator | 2026-03-27 01:06:11.632539 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-27 01:06:11.632545 | orchestrator | Friday 27 March 2026 01:03:33 +0000 (0:00:03.689) 0:00:05.049 ********** 2026-03-27 01:06:11.632552 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-27 01:06:11.632558 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-27 01:06:11.632564 | orchestrator | 2026-03-27 01:06:11.632569 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-27 01:06:11.632576 | orchestrator | Friday 27 March 2026 01:03:39 +0000 (0:00:06.613) 0:00:11.662 ********** 2026-03-27 01:06:11.632658 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-27 01:06:11.632666 | orchestrator | 2026-03-27 01:06:11.632673 | orchestrator | TASK [service-ks-register : cinder | Creating users] 
*************************** 2026-03-27 01:06:11.632680 | orchestrator | Friday 27 March 2026 01:03:43 +0000 (0:00:03.305) 0:00:14.967 ********** 2026-03-27 01:06:11.632686 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-27 01:06:11.632708 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-27 01:06:11.632715 | orchestrator | 2026-03-27 01:06:11.632740 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-27 01:06:11.632747 | orchestrator | Friday 27 March 2026 01:03:47 +0000 (0:00:03.997) 0:00:18.965 ********** 2026-03-27 01:06:11.632753 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-27 01:06:11.632759 | orchestrator | 2026-03-27 01:06:11.632782 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-27 01:06:11.632787 | orchestrator | Friday 27 March 2026 01:03:50 +0000 (0:00:03.544) 0:00:22.510 ********** 2026-03-27 01:06:11.632793 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-27 01:06:11.632798 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-27 01:06:11.632804 | orchestrator | 2026-03-27 01:06:11.632809 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-27 01:06:11.632816 | orchestrator | Friday 27 March 2026 01:03:56 +0000 (0:00:06.238) 0:00:28.749 ********** 2026-03-27 01:06:11.632824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-27 01:06:11.632886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-27 01:06:11.632895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.632906 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-27 01:06:11.632918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.632925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.632932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.632944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.632951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.632961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.632971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.632978 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.632986 | orchestrator | 2026-03-27 01:06:11.632993 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-27 01:06:11.633000 | orchestrator | Friday 27 March 2026 01:03:59 +0000 (0:00:02.661) 0:00:31.411 ********** 2026-03-27 01:06:11.633007 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:06:11.633013 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:06:11.633020 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:06:11.633026 | orchestrator | 2026-03-27 01:06:11.633033 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-27 01:06:11.633039 | orchestrator | Friday 27 March 2026 01:04:00 +0000 (0:00:00.372) 0:00:31.783 ********** 2026-03-27 01:06:11.633045 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:06:11.633050 | orchestrator | 2026-03-27 01:06:11.633056 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-27 01:06:11.633061 | orchestrator | Friday 27 March 2026 01:04:00 +0000 (0:00:00.752) 0:00:32.536 ********** 2026-03-27 01:06:11.633071 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-27 01:06:11.633076 | orchestrator | changed: 
[testbed-node-1] => (item=cinder-volume) 2026-03-27 01:06:11.633082 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-27 01:06:11.633087 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-27 01:06:11.633093 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-27 01:06:11.633099 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-27 01:06:11.633105 | orchestrator | 2026-03-27 01:06:11.633110 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-27 01:06:11.633116 | orchestrator | Friday 27 March 2026 01:04:03 +0000 (0:00:02.314) 0:00:34.851 ********** 2026-03-27 01:06:11.633127 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-27 01:06:11.633138 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-27 01:06:11.633144 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-27 01:06:11.633150 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-27 01:06:11.633161 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-27 01:06:11.633166 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-27 01:06:11.633178 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-27 01:06:11.633184 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-27 01:06:11.633190 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-27 01:06:11.633199 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-27 01:06:11.633205 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-27 01:06:11.633217 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-27 01:06:11.633223 | orchestrator | 2026-03-27 01:06:11.633229 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-27 01:06:11.633235 | orchestrator | Friday 27 March 2026 01:04:07 +0000 (0:00:04.136) 0:00:38.987 ********** 2026-03-27 01:06:11.633240 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-27 01:06:11.633246 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-27 01:06:11.633252 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-27 01:06:11.633258 | orchestrator | 2026-03-27 01:06:11.633263 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-27 01:06:11.633269 | orchestrator | Friday 27 March 2026 01:04:10 +0000 (0:00:03.323) 0:00:42.310 ********** 2026-03-27 01:06:11.633274 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-27 01:06:11.633280 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-27 01:06:11.633286 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-27 01:06:11.633292 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-27 01:06:11.633311 | orchestrator | changed: [testbed-node-2] => 
(item=ceph.client.cinder-backup.keyring) 2026-03-27 01:06:11.633317 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-27 01:06:11.633323 | orchestrator | 2026-03-27 01:06:11.633328 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-27 01:06:11.633334 | orchestrator | Friday 27 March 2026 01:04:14 +0000 (0:00:04.050) 0:00:46.365 ********** 2026-03-27 01:06:11.633340 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-27 01:06:11.633345 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-27 01:06:11.633350 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-27 01:06:11.633356 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-27 01:06:11.633361 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-27 01:06:11.633366 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-27 01:06:11.633371 | orchestrator | 2026-03-27 01:06:11.633376 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-27 01:06:11.633381 | orchestrator | Friday 27 March 2026 01:04:15 +0000 (0:00:01.182) 0:00:47.547 ********** 2026-03-27 01:06:11.633389 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:06:11.633395 | orchestrator | 2026-03-27 01:06:11.633401 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-27 01:06:11.633407 | orchestrator | Friday 27 March 2026 01:04:15 +0000 (0:00:00.119) 0:00:47.667 ********** 2026-03-27 01:06:11.633422 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:06:11.633428 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:06:11.633433 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:06:11.633439 | orchestrator | 2026-03-27 01:06:11.633444 | orchestrator | TASK [cinder : include_tasks] ************************************************** 
2026-03-27 01:06:11.633449 | orchestrator | Friday 27 March 2026 01:04:16 +0000 (0:00:00.443) 0:00:48.110 ********** 2026-03-27 01:06:11.633455 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:06:11.633461 | orchestrator | 2026-03-27 01:06:11.633467 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-27 01:06:11.633478 | orchestrator | Friday 27 March 2026 01:04:16 +0000 (0:00:00.508) 0:00:48.619 ********** 2026-03-27 01:06:11.633547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-27 01:06:11.633559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-27 01:06:11.633565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-27 01:06:11.633571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.633581 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.633592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.633599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.633609 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.633615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.633621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.633631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.633930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.633952 | orchestrator | 2026-03-27 01:06:11.633960 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-27 01:06:11.633967 | orchestrator | Friday 27 March 2026 01:04:20 +0000 (0:00:03.310) 0:00:51.930 ********** 2026-03-27 01:06:11.633980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-27 01:06:11.633987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.633995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634047 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:06:11.634063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-27 01:06:11.634071 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634097 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:06:11.634105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-27 01:06:11.634116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634143 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:06:11.634150 | orchestrator | 2026-03-27 01:06:11.634159 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-27 01:06:11.634164 | orchestrator | Friday 27 March 2026 01:04:21 +0000 (0:00:01.092) 0:00:53.022 ********** 2026-03-27 01:06:11.634169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-27 01:06:11.634178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634198 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:06:11.634203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-27 01:06:11.634211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-27 01:06:11.634217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634308 | orchestrator | skipping: 
[testbed-node-2] 2026-03-27 01:06:11.634313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-27 01:06:11.634323 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:06:11.634328 | orchestrator | 2026-03-27 01:06:11.634333 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-27 01:06:11.634339 | orchestrator | Friday 27 March 2026 01:04:22 +0000 (0:00:01.337) 0:00:54.360 ********** 2026-03-27 01:06:11.634367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})
2026-03-27 01:06:11.634373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634455 | orchestrator |
2026-03-27 01:06:11.634460 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-03-27 01:06:11.634465 | orchestrator | Friday 27 March 2026 01:04:26 +0000 (0:00:04.377) 0:00:58.737 **********
2026-03-27 01:06:11.634470 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-27 01:06:11.634476 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-27 01:06:11.634481 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-27 01:06:11.634501 | orchestrator |
2026-03-27 01:06:11.634506 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-03-27 01:06:11.634512 | orchestrator | Friday 27 March 2026 01:04:28 +0000 (0:00:01.778) 0:01:00.516 **********
2026-03-27 01:06:11.634520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776',
'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634600 | orchestrator |
2026-03-27 01:06:11.634605 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-03-27 01:06:11.634609 | orchestrator | Friday 27 March 2026 01:04:41 +0000 (0:00:12.997) 0:01:13.514 **********
2026-03-27 01:06:11.634615 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:06:11.634620 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:06:11.634625 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:06:11.634630 | orchestrator |
2026-03-27 01:06:11.634635 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] *********************
2026-03-27 01:06:11.634642 | orchestrator | Friday 27 March 2026 01:04:43 +0000 (0:00:01.528) 0:01:15.043 **********
2026-03-27 01:06:11.634647 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:06:11.634651 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:06:11.634656 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:06:11.634661 | orchestrator |
2026-03-27 01:06:11.634692 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-27 01:06:11.634698 | orchestrator | Friday 27 March 2026 01:04:45 +0000 (0:00:01.807) 0:01:16.850 **********
2026-03-27 01:06:11.634703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634732 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:06:11.634740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634768 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:06:11.634773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634801 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:06:11.634806 | orchestrator |
2026-03-27 01:06:11.634812 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-03-27 01:06:11.634817 | orchestrator | Friday 27 March 2026 01:04:47 +0000 (0:00:02.798) 0:01:19.649 **********
2026-03-27 01:06:11.634822 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:06:11.634827 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:06:11.634833 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:06:11.634837 | orchestrator |
2026-03-27 01:06:11.634843 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-03-27 01:06:11.634851 | orchestrator | Friday 27 March 2026 01:04:48 +0000 (0:00:00.639) 0:01:20.288 **********
2026-03-27 01:06:11.634856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-27 01:06:11.634881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27 01:06:11.634934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-27
01:06:11.634942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-27 01:06:11.634948 | orchestrator | 2026-03-27 01:06:11.634953 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-27 01:06:11.634959 | orchestrator | Friday 27 March 2026 01:04:52 +0000 (0:00:03.897) 0:01:24.186 ********** 2026-03-27 01:06:11.634964 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:06:11.634970 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:06:11.634975 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:06:11.634980 | orchestrator | 2026-03-27 01:06:11.634986 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-27 01:06:11.634991 | orchestrator | Friday 27 March 2026 01:04:52 +0000 (0:00:00.261) 0:01:24.447 ********** 2026-03-27 01:06:11.634997 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:06:11.635003 | orchestrator | 2026-03-27 01:06:11.635008 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-27 01:06:11.635014 | orchestrator | Friday 27 March 2026 01:04:54 +0000 (0:00:02.064) 0:01:26.512 ********** 2026-03-27 01:06:11.635019 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:06:11.635025 | orchestrator | 2026-03-27 01:06:11.635030 | 
orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-27 01:06:11.635035 | orchestrator | Friday 27 March 2026 01:04:56 +0000 (0:00:02.181) 0:01:28.693 ********** 2026-03-27 01:06:11.635041 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:06:11.635046 | orchestrator | 2026-03-27 01:06:11.635052 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-27 01:06:11.635057 | orchestrator | Friday 27 March 2026 01:05:16 +0000 (0:00:19.543) 0:01:48.236 ********** 2026-03-27 01:06:11.635063 | orchestrator | 2026-03-27 01:06:11.635072 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-27 01:06:11.635077 | orchestrator | Friday 27 March 2026 01:05:16 +0000 (0:00:00.061) 0:01:48.298 ********** 2026-03-27 01:06:11.635082 | orchestrator | 2026-03-27 01:06:11.635087 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-27 01:06:11.635092 | orchestrator | Friday 27 March 2026 01:05:16 +0000 (0:00:00.058) 0:01:48.356 ********** 2026-03-27 01:06:11.635098 | orchestrator | 2026-03-27 01:06:11.635103 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-27 01:06:11.635123 | orchestrator | Friday 27 March 2026 01:05:16 +0000 (0:00:00.065) 0:01:48.422 ********** 2026-03-27 01:06:11.635128 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:06:11.635134 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:06:11.635139 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:06:11.635145 | orchestrator | 2026-03-27 01:06:11.635150 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-27 01:06:11.635155 | orchestrator | Friday 27 March 2026 01:05:35 +0000 (0:00:18.985) 0:02:07.408 ********** 2026-03-27 01:06:11.635161 | orchestrator | changed: [testbed-node-0] 
2026-03-27 01:06:11.635166 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:06:11.635171 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:06:11.635176 | orchestrator | 2026-03-27 01:06:11.635182 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-27 01:06:11.635187 | orchestrator | Friday 27 March 2026 01:05:41 +0000 (0:00:05.366) 0:02:12.774 ********** 2026-03-27 01:06:11.635192 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:06:11.635198 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:06:11.635203 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:06:11.635208 | orchestrator | 2026-03-27 01:06:11.635213 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-27 01:06:11.635224 | orchestrator | Friday 27 March 2026 01:06:00 +0000 (0:00:19.666) 0:02:32.440 ********** 2026-03-27 01:06:11.635229 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:06:11.635234 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:06:11.635240 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:06:11.635245 | orchestrator | 2026-03-27 01:06:11.635251 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-27 01:06:11.635257 | orchestrator | Friday 27 March 2026 01:06:08 +0000 (0:00:08.153) 0:02:40.594 ********** 2026-03-27 01:06:11.635262 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:06:11.635267 | orchestrator | 2026-03-27 01:06:11.635273 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:06:11.635279 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-27 01:06:11.635285 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-27 01:06:11.635291 | orchestrator | testbed-node-2 : ok=22  
changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-27 01:06:11.635296 | orchestrator | 2026-03-27 01:06:11.635301 | orchestrator | 2026-03-27 01:06:11.635306 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:06:11.635312 | orchestrator | Friday 27 March 2026 01:06:09 +0000 (0:00:00.251) 0:02:40.845 ********** 2026-03-27 01:06:11.635317 | orchestrator | =============================================================================== 2026-03-27 01:06:11.635322 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 19.67s 2026-03-27 01:06:11.635331 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.54s 2026-03-27 01:06:11.635336 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 18.99s 2026-03-27 01:06:11.635342 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.00s 2026-03-27 01:06:11.635351 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.15s 2026-03-27 01:06:11.635356 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.61s 2026-03-27 01:06:11.635361 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.24s 2026-03-27 01:06:11.635367 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.37s 2026-03-27 01:06:11.635372 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.38s 2026-03-27 01:06:11.635377 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.14s 2026-03-27 01:06:11.635382 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.05s 2026-03-27 01:06:11.635387 | orchestrator | service-ks-register : cinder | Creating users 
--------------------------- 4.00s 2026-03-27 01:06:11.635393 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.90s 2026-03-27 01:06:11.635398 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.69s 2026-03-27 01:06:11.635404 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.55s 2026-03-27 01:06:11.635409 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.33s 2026-03-27 01:06:11.635414 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.31s 2026-03-27 01:06:11.635420 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.31s 2026-03-27 01:06:11.635426 | orchestrator | cinder : Copying over existing policy file ------------------------------ 2.80s 2026-03-27 01:06:11.635431 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.66s 2026-03-27 01:06:11.635546 | orchestrator | 2026-03-27 01:06:11 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:11.635555 | orchestrator | 2026-03-27 01:06:11 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:14.691063 | orchestrator | 2026-03-27 01:06:14 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:14.737597 | orchestrator | 2026-03-27 01:06:14 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:14.738396 | orchestrator | 2026-03-27 01:06:14 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:14.738436 | orchestrator | 2026-03-27 01:06:14 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:17.791034 | orchestrator | 2026-03-27 01:06:17 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:17.793320 | orchestrator | 2026-03-27 01:06:17 | 
INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:17.795373 | orchestrator | 2026-03-27 01:06:17 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:17.795507 | orchestrator | 2026-03-27 01:06:17 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:20.842416 | orchestrator | 2026-03-27 01:06:20 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:20.844093 | orchestrator | 2026-03-27 01:06:20 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:20.845944 | orchestrator | 2026-03-27 01:06:20 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:20.845996 | orchestrator | 2026-03-27 01:06:20 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:23.886737 | orchestrator | 2026-03-27 01:06:23 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:23.888669 | orchestrator | 2026-03-27 01:06:23 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:23.889310 | orchestrator | 2026-03-27 01:06:23 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:23.889354 | orchestrator | 2026-03-27 01:06:23 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:26.930318 | orchestrator | 2026-03-27 01:06:26 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:26.930378 | orchestrator | 2026-03-27 01:06:26 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:26.930386 | orchestrator | 2026-03-27 01:06:26 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:26.930392 | orchestrator | 2026-03-27 01:06:26 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:29.978405 | orchestrator | 2026-03-27 01:06:29 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in 
state STARTED 2026-03-27 01:06:29.981960 | orchestrator | 2026-03-27 01:06:29 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:29.984226 | orchestrator | 2026-03-27 01:06:29 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:29.984276 | orchestrator | 2026-03-27 01:06:29 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:33.034413 | orchestrator | 2026-03-27 01:06:33 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:33.034495 | orchestrator | 2026-03-27 01:06:33 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:33.034570 | orchestrator | 2026-03-27 01:06:33 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:33.034576 | orchestrator | 2026-03-27 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:36.081738 | orchestrator | 2026-03-27 01:06:36 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:36.082857 | orchestrator | 2026-03-27 01:06:36 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:36.083847 | orchestrator | 2026-03-27 01:06:36 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:36.083886 | orchestrator | 2026-03-27 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:39.120572 | orchestrator | 2026-03-27 01:06:39 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:39.120615 | orchestrator | 2026-03-27 01:06:39 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:39.121688 | orchestrator | 2026-03-27 01:06:39 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:39.121709 | orchestrator | 2026-03-27 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:42.169228 | orchestrator 
| 2026-03-27 01:06:42 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:42.171296 | orchestrator | 2026-03-27 01:06:42 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:42.174793 | orchestrator | 2026-03-27 01:06:42 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:42.174836 | orchestrator | 2026-03-27 01:06:42 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:45.222408 | orchestrator | 2026-03-27 01:06:45 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:45.223898 | orchestrator | 2026-03-27 01:06:45 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:45.225679 | orchestrator | 2026-03-27 01:06:45 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:45.225726 | orchestrator | 2026-03-27 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:48.266542 | orchestrator | 2026-03-27 01:06:48 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:48.269115 | orchestrator | 2026-03-27 01:06:48 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:48.270911 | orchestrator | 2026-03-27 01:06:48 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:48.270953 | orchestrator | 2026-03-27 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:51.319193 | orchestrator | 2026-03-27 01:06:51 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:51.321173 | orchestrator | 2026-03-27 01:06:51 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:51.323240 | orchestrator | 2026-03-27 01:06:51 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:51.323287 | orchestrator | 2026-03-27 01:06:51 | INFO  | 
Wait 1 second(s) until the next check 2026-03-27 01:06:54.372586 | orchestrator | 2026-03-27 01:06:54 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:54.374299 | orchestrator | 2026-03-27 01:06:54 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:54.377118 | orchestrator | 2026-03-27 01:06:54 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:54.377296 | orchestrator | 2026-03-27 01:06:54 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:06:57.415235 | orchestrator | 2026-03-27 01:06:57 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:06:57.417170 | orchestrator | 2026-03-27 01:06:57 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:06:57.419159 | orchestrator | 2026-03-27 01:06:57 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:06:57.419359 | orchestrator | 2026-03-27 01:06:57 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:00.455884 | orchestrator | 2026-03-27 01:07:00 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:00.457126 | orchestrator | 2026-03-27 01:07:00 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:00.458784 | orchestrator | 2026-03-27 01:07:00 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:00.458854 | orchestrator | 2026-03-27 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:03.505019 | orchestrator | 2026-03-27 01:07:03 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:03.506112 | orchestrator | 2026-03-27 01:07:03 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:03.507477 | orchestrator | 2026-03-27 01:07:03 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state 
STARTED 2026-03-27 01:07:03.507559 | orchestrator | 2026-03-27 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:06.558450 | orchestrator | 2026-03-27 01:07:06 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:06.559185 | orchestrator | 2026-03-27 01:07:06 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:06.560225 | orchestrator | 2026-03-27 01:07:06 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:06.560407 | orchestrator | 2026-03-27 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:09.606759 | orchestrator | 2026-03-27 01:07:09 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:09.607783 | orchestrator | 2026-03-27 01:07:09 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:09.609141 | orchestrator | 2026-03-27 01:07:09 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:09.609178 | orchestrator | 2026-03-27 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:12.654368 | orchestrator | 2026-03-27 01:07:12 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:12.655143 | orchestrator | 2026-03-27 01:07:12 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:12.656334 | orchestrator | 2026-03-27 01:07:12 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:12.657171 | orchestrator | 2026-03-27 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:15.703876 | orchestrator | 2026-03-27 01:07:15 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:15.705580 | orchestrator | 2026-03-27 01:07:15 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:15.709311 | orchestrator | 
2026-03-27 01:07:15 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:15.709354 | orchestrator | 2026-03-27 01:07:15 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:18.769557 | orchestrator | 2026-03-27 01:07:18 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:18.769639 | orchestrator | 2026-03-27 01:07:18 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:18.769651 | orchestrator | 2026-03-27 01:07:18 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:18.769711 | orchestrator | 2026-03-27 01:07:18 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:21.813960 | orchestrator | 2026-03-27 01:07:21 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:21.814071 | orchestrator | 2026-03-27 01:07:21 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:21.814656 | orchestrator | 2026-03-27 01:07:21 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:21.816037 | orchestrator | 2026-03-27 01:07:21 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:24.846379 | orchestrator | 2026-03-27 01:07:24 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:24.848789 | orchestrator | 2026-03-27 01:07:24 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:24.852167 | orchestrator | 2026-03-27 01:07:24 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:24.852206 | orchestrator | 2026-03-27 01:07:24 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:27.900247 | orchestrator | 2026-03-27 01:07:27 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:27.902129 | orchestrator | 2026-03-27 01:07:27 | INFO  | Task 
cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:27.903963 | orchestrator | 2026-03-27 01:07:27 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:27.903992 | orchestrator | 2026-03-27 01:07:27 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:30.941988 | orchestrator | 2026-03-27 01:07:30 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:30.943677 | orchestrator | 2026-03-27 01:07:30 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:30.945218 | orchestrator | 2026-03-27 01:07:30 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:30.945263 | orchestrator | 2026-03-27 01:07:30 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:33.992521 | orchestrator | 2026-03-27 01:07:33 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:33.994080 | orchestrator | 2026-03-27 01:07:33 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:33.996080 | orchestrator | 2026-03-27 01:07:33 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:33.996114 | orchestrator | 2026-03-27 01:07:33 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:37.048539 | orchestrator | 2026-03-27 01:07:37 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:37.052902 | orchestrator | 2026-03-27 01:07:37 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:37.054884 | orchestrator | 2026-03-27 01:07:37 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:37.054998 | orchestrator | 2026-03-27 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:40.109552 | orchestrator | 2026-03-27 01:07:40 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state 
STARTED 2026-03-27 01:07:40.110817 | orchestrator | 2026-03-27 01:07:40 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:40.112229 | orchestrator | 2026-03-27 01:07:40 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:40.112278 | orchestrator | 2026-03-27 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:43.163231 | orchestrator | 2026-03-27 01:07:43 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:43.165945 | orchestrator | 2026-03-27 01:07:43 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:43.168037 | orchestrator | 2026-03-27 01:07:43 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:43.168076 | orchestrator | 2026-03-27 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:46.210964 | orchestrator | 2026-03-27 01:07:46 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:46.213454 | orchestrator | 2026-03-27 01:07:46 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:46.213496 | orchestrator | 2026-03-27 01:07:46 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:46.213501 | orchestrator | 2026-03-27 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:49.253175 | orchestrator | 2026-03-27 01:07:49 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:49.253645 | orchestrator | 2026-03-27 01:07:49 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state STARTED 2026-03-27 01:07:49.255419 | orchestrator | 2026-03-27 01:07:49 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:49.255521 | orchestrator | 2026-03-27 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:52.301881 | orchestrator | 
2026-03-27 01:07:52 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state STARTED 2026-03-27 01:07:52.305501 | orchestrator | 2026-03-27 01:07:52 | INFO  | Task cf5e286f-157a-4f5a-a3fd-e1157b2b4543 is in state SUCCESS 2026-03-27 01:07:52.305628 | orchestrator | 2026-03-27 01:07:52.307198 | orchestrator | 2026-03-27 01:07:52.307245 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 01:07:52.307254 | orchestrator | 2026-03-27 01:07:52.307261 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 01:07:52.307268 | orchestrator | Friday 27 March 2026 01:06:02 +0000 (0:00:00.282) 0:00:00.282 ********** 2026-03-27 01:07:52.307275 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:07:52.307283 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:07:52.307290 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:07:52.307296 | orchestrator | 2026-03-27 01:07:52.307306 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 01:07:52.307316 | orchestrator | Friday 27 March 2026 01:06:02 +0000 (0:00:00.270) 0:00:00.553 ********** 2026-03-27 01:07:52.307326 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-27 01:07:52.307439 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-27 01:07:52.307465 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-27 01:07:52.307501 | orchestrator | 2026-03-27 01:07:52.307545 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-27 01:07:52.307566 | orchestrator | 2026-03-27 01:07:52.307586 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-27 01:07:52.307606 | orchestrator | Friday 27 March 2026 01:06:02 +0000 (0:00:00.242) 0:00:00.796 ********** 2026-03-27 01:07:52.307723 | orchestrator | 
included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:07:52.307735 | orchestrator | 2026-03-27 01:07:52.307745 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-27 01:07:52.307754 | orchestrator | Friday 27 March 2026 01:06:03 +0000 (0:00:00.553) 0:00:01.349 ********** 2026-03-27 01:07:52.307766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.307778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.307789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.307815 | orchestrator | 2026-03-27 01:07:52.307826 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-27 01:07:52.307835 | orchestrator | Friday 27 March 2026 01:06:04 +0000 (0:00:01.003) 0:00:02.353 ********** 2026-03-27 01:07:52.307844 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-27 01:07:52.307853 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-27 01:07:52.307862 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-27 01:07:52.307871 | orchestrator | 2026-03-27 01:07:52.307879 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-27 01:07:52.307887 | orchestrator | Friday 27 March 2026 01:06:05 +0000 (0:00:00.788) 0:00:03.141 ********** 2026-03-27 01:07:52.307897 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:07:52.307906 | orchestrator | 2026-03-27 01:07:52.307926 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-27 01:07:52.307946 | orchestrator | Friday 27 March 2026 01:06:05 +0000 (0:00:00.459) 0:00:03.601 ********** 2026-03-27 01:07:52.307972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.307988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.307999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.308011 | orchestrator | 2026-03-27 01:07:52.308039 
| orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-27 01:07:52.308118 | orchestrator | Friday 27 March 2026 01:06:07 +0000 (0:00:01.495) 0:00:05.097 ********** 2026-03-27 01:07:52.308130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-27 01:07:52.308150 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:07:52.308160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-27 01:07:52.308169 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:07:52.308191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-27 01:07:52.308201 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:07:52.308210 | orchestrator | 2026-03-27 01:07:52.308218 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-27 01:07:52.308228 | orchestrator | Friday 27 March 2026 01:06:07 +0000 (0:00:00.387) 0:00:05.484 ********** 2026-03-27 01:07:52.308237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-27 01:07:52.308246 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:07:52.308277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-27 01:07:52.308288 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:07:52.308298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-27 01:07:52.308314 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:07:52.308324 | orchestrator | 2026-03-27 01:07:52.308334 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-27 01:07:52.308344 | orchestrator | Friday 27 March 2026 01:06:07 +0000 (0:00:00.547) 0:00:06.032 ********** 2026-03-27 01:07:52.308354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.308381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.308404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.308414 | orchestrator | 2026-03-27 01:07:52.308423 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-27 01:07:52.308432 | orchestrator | Friday 27 March 2026 01:06:09 +0000 (0:00:01.529) 0:00:07.562 ********** 2026-03-27 01:07:52.308441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.308450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.308472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.308481 | 
orchestrator |
2026-03-27 01:07:52.308490 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-27 01:07:52.308499 | orchestrator | Friday 27 March 2026 01:06:11 +0000 (0:00:01.580) 0:00:09.143 **********
2026-03-27 01:07:52.308508 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:07:52.308517 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:07:52.308525 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:07:52.308534 | orchestrator |
2026-03-27 01:07:52.308543 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-27 01:07:52.308552 | orchestrator | Friday 27 March 2026 01:06:11 +0000 (0:00:00.300) 0:00:09.444 **********
2026-03-27 01:07:52.308560 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-27 01:07:52.308569 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-27 01:07:52.308578 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-27 01:07:52.308588 | orchestrator |
2026-03-27 01:07:52.308596 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-27 01:07:52.308605 | orchestrator | Friday 27 March 2026 01:06:12 +0000 (0:00:01.337) 0:00:10.782 **********
2026-03-27 01:07:52.308614 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-27 01:07:52.308623 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-27 01:07:52.308631 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-27 01:07:52.308640 | orchestrator |
2026-03-27 01:07:52.308652 | 
orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-27 01:07:52.308661 | orchestrator | Friday 27 March 2026 01:06:13 +0000 (0:00:01.350) 0:00:12.043 **********
2026-03-27 01:07:52.308673 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-27 01:07:52.308680 | orchestrator |
2026-03-27 01:07:52.308688 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-27 01:07:52.308696 | orchestrator | Friday 27 March 2026 01:06:15 +0000 (0:00:01.350) 0:00:13.394 **********
2026-03-27 01:07:52.308704 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-27 01:07:52.308711 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-27 01:07:52.308720 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:07:52.308729 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:07:52.308738 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:07:52.308746 | orchestrator |
2026-03-27 01:07:52.308755 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-27 01:07:52.308763 | orchestrator | Friday 27 March 2026 01:06:15 +0000 (0:00:00.665) 0:00:14.060 **********
2026-03-27 01:07:52.308771 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:07:52.308779 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:07:52.308787 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:07:52.308795 | orchestrator |
2026-03-27 01:07:52.308811 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-27 01:07:52.308820 | orchestrator | Friday 27 March 2026 01:06:16 +0000 (0:00:00.365) 0:00:14.426 **********
2026-03-27 01:07:52.308831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1114335, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6525767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1114335, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6525767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1114335, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6525767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1114385, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6596987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1114385, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6596987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1114385, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6596987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308901 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1114458, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6669621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1114458, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6669621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1114458, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6669621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308928 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1114375, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.656189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1114375, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.656189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1114375, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.656189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308969 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1114462, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6681135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1114462, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6681135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.308987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1114462, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6681135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-03-27 01:07:52.308996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1114352, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6538227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1114352, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6538227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1114352, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6538227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1114419, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.66227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1114419, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.66227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1114419, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.66227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1114443, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6648846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1114443, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6648846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1114443, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6648846, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1114331, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6514173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1114331, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6514173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1114331, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6514173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1114346, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6534479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1114346, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6534479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1114346, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6534479, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1114380, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.656555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1114380, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.656555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1114380, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.656555, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1114429, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6633532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1114429, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6633532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1114429, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 
1774570678.6633532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1114456, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6661134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1114456, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6661134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1114456, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 
1774569745.0, 'ctime': 1774570678.6661134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1114365, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.655356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1114365, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.655356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1114365, 'dev': 112, 'nlink': 1, 'atime': 
1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.655356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1114436, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6646829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1114436, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6646829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1114436, 
'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6646829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1114472, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.668854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1114472, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.668854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 
'inode': 1114472, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.668854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1114423, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6628022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1114423, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6628022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 38375, 'inode': 1114423, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6628022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1114412, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6618662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1114412, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6618662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1114412, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6618662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1114407, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6607127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1114407, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6607127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1114407, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6607127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1114431, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1114431, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1114431, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1114403, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6601913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1114403, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6601913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1114403, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6601913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1114448, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.665779, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1114448, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.665779, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309936 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1114448, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.665779, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1114357, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6548285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1114357, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6548285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309960 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1114357, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6548285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1114945, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.760304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1114945, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.760304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 
01:07:52.309976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1114945, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.760304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1114520, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6805582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1114520, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6805582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.309995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1114520, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6805582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1114494, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6716368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1114494, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6716368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1114494, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6716368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1114580, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6828294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1114580, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6828294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1114580, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6828294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1114479, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6691134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1114479, 'dev': 112, 'nlink': 1, 'atime': 
1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6691134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1114479, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6691134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1114687, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.712498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1114687, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.712498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1114687, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.712498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1114585, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7102191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1114585, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7102191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1114585, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7102191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1114694, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.713675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310141 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1114694, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.713675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1114694, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.713675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1114938, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7595847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-03-27 01:07:52.310168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1114938, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7595847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1114938, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7595847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1114684, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7115843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1114684, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7115843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1114684, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7115843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1114572, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6816094, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1114572, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6816094, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1114572, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6816094, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1114513, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6740081, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1114513, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6740081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1114513, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6740081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1114567, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 
1774570678.6812203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1114567, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6812203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1114567, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6812203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1114501, 'dev': 112, 
'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6732547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1114501, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6732547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1114501, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6732547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1114574, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6821136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1114574, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6821136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1114574, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6821136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1114930, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7588644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1114930, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7588644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1114930, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7588644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310439 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1114704, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7562883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1114704, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7562883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1114704, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7562883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1114485, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6705194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1114485, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6705194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1114485, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6705194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1114487, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6711004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1114487, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6711004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1114487, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.6711004, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1114678, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7112374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1114678, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7112374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1114678, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7112374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1114701, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7143118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1114701, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7143118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1114701, 'dev': 112, 'nlink': 1, 'atime': 1774569745.0, 'mtime': 1774569745.0, 'ctime': 1774570678.7143118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-27 01:07:52.310602 | orchestrator | 2026-03-27 01:07:52.310610 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-27 01:07:52.310627 | orchestrator | Friday 27 March 2026 01:06:50 +0000 (0:00:34.224) 0:00:48.650 ********** 2026-03-27 01:07:52.310635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.310643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.310652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-27 01:07:52.310660 | orchestrator | 2026-03-27 01:07:52.310668 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-27 01:07:52.310676 | orchestrator | Friday 27 March 2026 01:06:51 +0000 (0:00:01.136) 0:00:49.787 ********** 2026-03-27 01:07:52.310685 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:07:52.310694 | orchestrator | 2026-03-27 01:07:52.310702 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-27 01:07:52.310710 | orchestrator | Friday 27 March 2026 01:06:53 +0000 (0:00:01.877) 0:00:51.664 ********** 2026-03-27 01:07:52.310718 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:07:52.310726 | orchestrator | 2026-03-27 01:07:52.310733 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-27 01:07:52.310741 | orchestrator | Friday 27 March 2026 01:06:55 +0000 (0:00:02.095) 0:00:53.760 ********** 2026-03-27 01:07:52.310749 | orchestrator | 2026-03-27 01:07:52.310757 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-27 
01:07:52.310765 | orchestrator | Friday 27 March 2026 01:06:55 +0000 (0:00:00.062) 0:00:53.822 ********** 2026-03-27 01:07:52.310773 | orchestrator | 2026-03-27 01:07:52.310781 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-27 01:07:52.310790 | orchestrator | Friday 27 March 2026 01:06:55 +0000 (0:00:00.081) 0:00:53.904 ********** 2026-03-27 01:07:52.310798 | orchestrator | 2026-03-27 01:07:52.310805 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-27 01:07:52.310813 | orchestrator | Friday 27 March 2026 01:06:55 +0000 (0:00:00.075) 0:00:53.980 ********** 2026-03-27 01:07:52.310818 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:07:52.310822 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:07:52.310831 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:07:52.310836 | orchestrator | 2026-03-27 01:07:52.310841 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-27 01:07:52.310853 | orchestrator | Friday 27 March 2026 01:06:57 +0000 (0:00:01.883) 0:00:55.863 ********** 2026-03-27 01:07:52.310861 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:07:52.310869 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:07:52.310878 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-27 01:07:52.310886 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-03-27 01:07:52.310895 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:07:52.310903 | orchestrator | 2026-03-27 01:07:52.310911 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-27 01:07:52.310919 | orchestrator | Friday 27 March 2026 01:07:24 +0000 (0:00:26.525) 0:01:22.389 ********** 2026-03-27 01:07:52.310928 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:07:52.310935 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:07:52.310944 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:07:52.310951 | orchestrator | 2026-03-27 01:07:52.310960 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-27 01:07:52.310968 | orchestrator | Friday 27 March 2026 01:07:44 +0000 (0:00:20.372) 0:01:42.761 ********** 2026-03-27 01:07:52.310976 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:07:52.310984 | orchestrator | 2026-03-27 01:07:52.310992 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-27 01:07:52.311000 | orchestrator | Friday 27 March 2026 01:07:46 +0000 (0:00:02.047) 0:01:44.809 ********** 2026-03-27 01:07:52.311008 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:07:52.311016 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:07:52.311024 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:07:52.311031 | orchestrator | 2026-03-27 01:07:52.311039 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-27 01:07:52.311047 | orchestrator | Friday 27 March 2026 01:07:47 +0000 (0:00:00.308) 0:01:45.118 ********** 2026-03-27 01:07:52.311056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-03-27 01:07:52.311065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-27 01:07:52.311073 | orchestrator | 2026-03-27 01:07:52.311081 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-27 01:07:52.311089 | orchestrator | Friday 27 March 2026 01:07:49 +0000 (0:00:02.200) 0:01:47.319 ********** 2026-03-27 01:07:52.311096 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:07:52.311104 | orchestrator | 2026-03-27 01:07:52.311112 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:07:52.311120 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-27 01:07:52.311129 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-27 01:07:52.311137 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-27 01:07:52.311146 | orchestrator | 2026-03-27 01:07:52.311154 | orchestrator | 2026-03-27 01:07:52.311159 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:07:52.311163 | orchestrator | Friday 27 March 2026 01:07:49 +0000 (0:00:00.263) 0:01:47.582 ********** 2026-03-27 01:07:52.311172 | orchestrator | =============================================================================== 2026-03-27 01:07:52.311177 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 34.22s 2026-03-27 01:07:52.311181 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 26.53s 2026-03-27 01:07:52.311186 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 20.37s 2026-03-27 01:07:52.311191 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.20s 2026-03-27 01:07:52.311196 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.10s 2026-03-27 01:07:52.311200 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.05s 2026-03-27 01:07:52.311205 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.88s 2026-03-27 01:07:52.311210 | orchestrator | grafana : Creating grafana database ------------------------------------- 1.88s 2026-03-27 01:07:52.311214 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.58s 2026-03-27 01:07:52.311219 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.53s 2026-03-27 01:07:52.311226 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.50s 2026-03-27 01:07:52.311231 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.35s 2026-03-27 01:07:52.311239 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.34s 2026-03-27 01:07:52.311244 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.26s 2026-03-27 01:07:52.311249 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.14s 2026-03-27 01:07:52.311254 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.00s 2026-03-27 01:07:52.311258 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.79s 2026-03-27 01:07:52.311263 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.67s 2026-03-27 01:07:52.311268 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.55s 2026-03-27 01:07:52.311272 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.55s 2026-03-27 01:07:52.311277 | orchestrator | 2026-03-27 01:07:52 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:52.311282 | orchestrator | 2026-03-27 01:07:52 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:55.361197 | orchestrator | 2026-03-27 01:07:55 | INFO  | Task ed73f58e-83a3-42eb-96db-1ebada122371 is in state SUCCESS 2026-03-27 01:07:55.362339 | orchestrator | 2026-03-27 01:07:55 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:55.364588 | orchestrator | 2026-03-27 01:07:55 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:07:55.364733 | orchestrator | 2026-03-27 01:07:55 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:07:58.413399 | orchestrator | 2026-03-27 01:07:58 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:07:58.415985 | orchestrator | 2026-03-27 01:07:58 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:07:58.416028 | orchestrator | 2026-03-27 01:07:58 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:08:01.453783 | orchestrator | 2026-03-27 01:08:01 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:08:01.455327 | orchestrator | 2026-03-27 01:08:01 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:08:01.455350 | orchestrator | 2026-03-27 01:08:01 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:08:04.499305 | orchestrator | 2026-03-27 01:08:04 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:08:04.501572 | orchestrator | 
2026-03-27 01:08:04 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:08:04.501633 | orchestrator | 2026-03-27 01:08:04 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:10:43.045996 | orchestrator | 2026-03-27 01:10:43 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state
STARTED 2026-03-27 01:10:43.047618 | orchestrator | 2026-03-27 01:10:43 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:10:43.047664 | orchestrator | 2026-03-27 01:10:43 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:10:46.099372 | orchestrator | 2026-03-27 01:10:46 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:10:46.101129 | orchestrator | 2026-03-27 01:10:46 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:10:46.102713 | orchestrator | 2026-03-27 01:10:46 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:10:49.137915 | orchestrator | 2026-03-27 01:10:49 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:10:49.139089 | orchestrator | 2026-03-27 01:10:49 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:10:49.139123 | orchestrator | 2026-03-27 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:10:52.177191 | orchestrator | 2026-03-27 01:10:52 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:10:52.177481 | orchestrator | 2026-03-27 01:10:52 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:10:52.177589 | orchestrator | 2026-03-27 01:10:52 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:10:55.237901 | orchestrator | 2026-03-27 01:10:55 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:10:55.238396 | orchestrator | 2026-03-27 01:10:55 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:10:55.238616 | orchestrator | 2026-03-27 01:10:55 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:10:58.279743 | orchestrator | 2026-03-27 01:10:58 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:10:58.281052 | orchestrator | 2026-03-27 01:10:58 | INFO  
| Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:10:58.281145 | orchestrator | 2026-03-27 01:10:58 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:01.325781 | orchestrator | 2026-03-27 01:11:01 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:01.326035 | orchestrator | 2026-03-27 01:11:01 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:01.326046 | orchestrator | 2026-03-27 01:11:01 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:04.371307 | orchestrator | 2026-03-27 01:11:04 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:04.373291 | orchestrator | 2026-03-27 01:11:04 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:04.373359 | orchestrator | 2026-03-27 01:11:04 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:07.415906 | orchestrator | 2026-03-27 01:11:07 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:07.416530 | orchestrator | 2026-03-27 01:11:07 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:07.416568 | orchestrator | 2026-03-27 01:11:07 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:10.469881 | orchestrator | 2026-03-27 01:11:10 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:10.471917 | orchestrator | 2026-03-27 01:11:10 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:10.471963 | orchestrator | 2026-03-27 01:11:10 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:13.510617 | orchestrator | 2026-03-27 01:11:13 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:13.513586 | orchestrator | 2026-03-27 01:11:13 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 
01:11:13.513636 | orchestrator | 2026-03-27 01:11:13 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:16.537757 | orchestrator | 2026-03-27 01:11:16 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:16.538938 | orchestrator | 2026-03-27 01:11:16 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:16.539010 | orchestrator | 2026-03-27 01:11:16 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:19.568284 | orchestrator | 2026-03-27 01:11:19 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:19.568787 | orchestrator | 2026-03-27 01:11:19 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:19.568811 | orchestrator | 2026-03-27 01:11:19 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:22.602462 | orchestrator | 2026-03-27 01:11:22 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:22.602727 | orchestrator | 2026-03-27 01:11:22 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:22.603246 | orchestrator | 2026-03-27 01:11:22 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:25.647481 | orchestrator | 2026-03-27 01:11:25 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:25.649555 | orchestrator | 2026-03-27 01:11:25 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:25.649697 | orchestrator | 2026-03-27 01:11:25 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:28.691044 | orchestrator | 2026-03-27 01:11:28 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:28.691760 | orchestrator | 2026-03-27 01:11:28 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:28.692148 | orchestrator | 2026-03-27 01:11:28 | INFO  | Wait 1 second(s) 
until the next check 2026-03-27 01:11:31.733679 | orchestrator | 2026-03-27 01:11:31 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:31.735652 | orchestrator | 2026-03-27 01:11:31 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:31.735708 | orchestrator | 2026-03-27 01:11:31 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:34.782084 | orchestrator | 2026-03-27 01:11:34 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:34.784404 | orchestrator | 2026-03-27 01:11:34 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:34.785126 | orchestrator | 2026-03-27 01:11:34 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:37.836611 | orchestrator | 2026-03-27 01:11:37 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:37.838402 | orchestrator | 2026-03-27 01:11:37 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:37.838490 | orchestrator | 2026-03-27 01:11:37 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:40.896112 | orchestrator | 2026-03-27 01:11:40 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:40.899091 | orchestrator | 2026-03-27 01:11:40 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:40.899231 | orchestrator | 2026-03-27 01:11:40 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:43.951744 | orchestrator | 2026-03-27 01:11:43 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:43.954433 | orchestrator | 2026-03-27 01:11:43 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:43.954503 | orchestrator | 2026-03-27 01:11:43 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:46.993301 | orchestrator | 2026-03-27 
01:11:46 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:46.993390 | orchestrator | 2026-03-27 01:11:46 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:46.993400 | orchestrator | 2026-03-27 01:11:46 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:50.040224 | orchestrator | 2026-03-27 01:11:50 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:50.047125 | orchestrator | 2026-03-27 01:11:50 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:50.047179 | orchestrator | 2026-03-27 01:11:50 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:53.096212 | orchestrator | 2026-03-27 01:11:53 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:53.097677 | orchestrator | 2026-03-27 01:11:53 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:53.097723 | orchestrator | 2026-03-27 01:11:53 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:56.133017 | orchestrator | 2026-03-27 01:11:56 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:56.135503 | orchestrator | 2026-03-27 01:11:56 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:56.135549 | orchestrator | 2026-03-27 01:11:56 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:11:59.175992 | orchestrator | 2026-03-27 01:11:59 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state STARTED 2026-03-27 01:11:59.176631 | orchestrator | 2026-03-27 01:11:59 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED 2026-03-27 01:11:59.176733 | orchestrator | 2026-03-27 01:11:59 | INFO  | Wait 1 second(s) until the next check 2026-03-27 01:12:02.232183 | orchestrator | 2026-03-27 01:12:02.232269 | orchestrator | 2026-03-27 01:12:02.232279 | orchestrator | 
PLAY [Group hosts based on configuration] **************************************
2026-03-27 01:12:02.232286 | orchestrator | 
2026-03-27 01:12:02.232293 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 01:12:02.232410 | orchestrator | Friday 27 March 2026 01:05:32 +0000 (0:00:00.191) 0:00:00.191 **********
2026-03-27 01:12:02.232419 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:12:02.232427 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:12:02.232434 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:12:02.232440 | orchestrator | 
2026-03-27 01:12:02.232461 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 01:12:02.232469 | orchestrator | Friday 27 March 2026 01:05:32 +0000 (0:00:00.439) 0:00:00.631 **********
2026-03-27 01:12:02.232476 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-27 01:12:02.232483 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-27 01:12:02.232489 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-27 01:12:02.232497 | orchestrator | 
2026-03-27 01:12:02.232503 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-27 01:12:02.232509 | orchestrator | 
2026-03-27 01:12:02.232516 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-27 01:12:02.232522 | orchestrator | Friday 27 March 2026 01:05:33 +0000 (0:00:00.791) 0:00:01.422 **********
2026-03-27 01:12:02.232528 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:12:02.232535 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:12:02.232541 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:12:02.232547 | orchestrator | 
2026-03-27 01:12:02.232554 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:12:02.232562 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:12:02.232570 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:12:02.232575 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:12:02.232582 | orchestrator | 
2026-03-27 01:12:02.232588 | orchestrator | 
2026-03-27 01:12:02.232595 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:12:02.232601 | orchestrator | Friday 27 March 2026 01:07:52 +0000 (0:02:19.458) 0:02:20.881 **********
2026-03-27 01:12:02.232608 | orchestrator | ===============================================================================
2026-03-27 01:12:02.232614 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 139.46s
2026-03-27 01:12:02.232620 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2026-03-27 01:12:02.232626 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s
2026-03-27 01:12:02.232633 | orchestrator | 
2026-03-27 01:12:02.232672 | orchestrator | 2026-03-27 01:12:02 | INFO  | Task b221a12f-8cd7-4f30-9fec-6dc9ca3bab67 is in state SUCCESS
2026-03-27 01:12:02.233721 | orchestrator | 
2026-03-27 01:12:02.233779 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 01:12:02.233789 | orchestrator | 
2026-03-27 01:12:02.233796 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-27 01:12:02.233802 | orchestrator | Friday 27 March 2026 01:03:52 +0000 (0:00:00.326) 0:00:00.326 **********
2026-03-27 01:12:02.233809 | orchestrator | changed: [testbed-manager]
2026-03-27 01:12:02.233817 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.233824 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:12:02.233830 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:12:02.233836 | orchestrator | changed: [testbed-node-3]
2026-03-27 01:12:02.233868 | orchestrator | changed: [testbed-node-4]
2026-03-27 01:12:02.233875 | orchestrator | changed: [testbed-node-5]
2026-03-27 01:12:02.233881 | orchestrator | 
2026-03-27 01:12:02.233900 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 01:12:02.233907 | orchestrator | Friday 27 March 2026 01:03:53 +0000 (0:00:00.653) 0:00:00.979 **********
2026-03-27 01:12:02.233913 | orchestrator | changed: [testbed-manager]
2026-03-27 01:12:02.233920 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.233926 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:12:02.233940 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:12:02.233946 | orchestrator | changed: [testbed-node-3]
2026-03-27 01:12:02.233953 | orchestrator | changed: [testbed-node-4]
2026-03-27 01:12:02.233959 | orchestrator | changed: [testbed-node-5]
2026-03-27 01:12:02.233965 | orchestrator | 
2026-03-27 01:12:02.233971 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 01:12:02.233978 | orchestrator | Friday 27 March 2026 01:03:54 +0000 (0:00:00.798) 0:00:01.778 **********
2026-03-27 01:12:02.233984 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-27 01:12:02.233991 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-27 01:12:02.233997 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-27 01:12:02.234004 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-27 01:12:02.234010 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-27 01:12:02.234122 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-27 01:12:02.234129 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-27 01:12:02.234137 | orchestrator | 
2026-03-27 01:12:02.234143 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-27 01:12:02.234149 | orchestrator | 
2026-03-27 01:12:02.234155 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-27 01:12:02.234161 | orchestrator | Friday 27 March 2026 01:03:55 +0000 (0:00:00.907) 0:00:02.685 **********
2026-03-27 01:12:02.234168 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 01:12:02.234222 | orchestrator | 
2026-03-27 01:12:02.234230 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-27 01:12:02.234237 | orchestrator | Friday 27 March 2026 01:03:56 +0000 (0:00:00.943) 0:00:03.629 **********
2026-03-27 01:12:02.234244 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-27 01:12:02.234251 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-27 01:12:02.234258 | orchestrator | 
2026-03-27 01:12:02.234264 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-27 01:12:02.234271 | orchestrator | Friday 27 March 2026 01:04:00 +0000 (0:00:04.495) 0:00:08.124 **********
2026-03-27 01:12:02.234278 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-27 01:12:02.234296 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-27 01:12:02.234303 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.234310 | orchestrator | 
2026-03-27 01:12:02.234318 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-27 01:12:02.234325 | orchestrator | Friday 27 March 2026 01:04:04 +0000 (0:00:04.123) 0:00:12.248 **********
2026-03-27 01:12:02.234331 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.234338 | orchestrator | 
2026-03-27 01:12:02.234345 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-27 01:12:02.234352 | orchestrator | Friday 27 March 2026 01:04:05 +0000 (0:00:00.657) 0:00:12.905 **********
2026-03-27 01:12:02.234360 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.234367 | orchestrator | 
2026-03-27 01:12:02.234374 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-27 01:12:02.234381 | orchestrator | Friday 27 March 2026 01:04:07 +0000 (0:00:01.634) 0:00:14.539 **********
2026-03-27 01:12:02.234388 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.234404 | orchestrator | 
2026-03-27 01:12:02.234410 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-27 01:12:02.234417 | orchestrator | Friday 27 March 2026 01:04:14 +0000 (0:00:07.160) 0:00:21.699 **********
2026-03-27 01:12:02.234424 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.234431 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.234438 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.234445 | orchestrator | 
2026-03-27 01:12:02.234452 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-27 01:12:02.234460 | orchestrator | Friday 27 March 2026 01:04:15 +0000 (0:00:00.904) 0:00:22.604 **********
2026-03-27 01:12:02.234467 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:12:02.234474 | orchestrator | 
2026-03-27 01:12:02.234482 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-27 01:12:02.234489 | orchestrator | Friday 27 March 2026 01:04:44 +0000 (0:00:29.067) 0:00:51.671 **********
2026-03-27 01:12:02.234495 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.234516 | orchestrator | 
2026-03-27 01:12:02.234523 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-27 01:12:02.234529 | orchestrator | Friday 27 March 2026 01:04:58 +0000 (0:00:14.253) 0:01:05.925 **********
2026-03-27 01:12:02.234535 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:12:02.234541 | orchestrator | 
2026-03-27 01:12:02.234548 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-27 01:12:02.234554 | orchestrator | Friday 27 March 2026 01:05:11 +0000 (0:00:12.905) 0:01:18.831 **********
2026-03-27 01:12:02.234576 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:12:02.234583 | orchestrator | 
2026-03-27 01:12:02.234589 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-27 01:12:02.234595 | orchestrator | Friday 27 March 2026 01:05:12 +0000 (0:00:00.611) 0:01:19.442 **********
2026-03-27 01:12:02.234601 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.234607 | orchestrator | 
2026-03-27 01:12:02.234614 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-27 01:12:02.234621 | orchestrator | Friday 27 March 2026 01:05:12 +0000 (0:00:00.409) 0:01:19.851 **********
2026-03-27 01:12:02.234628 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 01:12:02.234634 | orchestrator | 
2026-03-27 01:12:02.234641 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-27 01:12:02.234647 | orchestrator | Friday 27 March 2026 01:05:13 +0000 (0:00:00.597) 0:01:20.449 **********
2026-03-27 01:12:02.234653 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:12:02.234659 | orchestrator | 
2026-03-27 01:12:02.234665 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-27 01:12:02.234672 | orchestrator | Friday 27 March 2026 01:05:30 +0000 (0:00:17.553) 0:01:38.003 **********
2026-03-27 01:12:02.234678 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.234684 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.234691 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.234697 | orchestrator | 
2026-03-27 01:12:02.234704 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-27 01:12:02.234756 | orchestrator | 
2026-03-27 01:12:02.234763 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-27 01:12:02.234769 | orchestrator | Friday 27 March 2026 01:05:31 +0000 (0:00:00.566) 0:01:38.569 **********
2026-03-27 01:12:02.234776 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 01:12:02.234783 | orchestrator | 
2026-03-27 01:12:02.234789 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-27 01:12:02.234796 | orchestrator | Friday 27 March 2026 01:05:32 +0000 (0:00:01.393) 0:01:39.963 **********
2026-03-27 01:12:02.234810 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.234817 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.234823 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.234835 | orchestrator | 
2026-03-27 01:12:02.234841 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-27 01:12:02.234847 | orchestrator | Friday 27 March 2026 01:05:34 +0000 (0:00:02.139) 0:01:42.102 **********
2026-03-27 01:12:02.234854 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.234860 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.234866 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.234872 | orchestrator | 
2026-03-27 01:12:02.234895 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-27 01:12:02.234903 | orchestrator | Friday 27 March 2026 01:05:36 +0000 (0:00:02.289) 0:01:44.392 **********
2026-03-27 01:12:02.234910 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.234917 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.234924 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.234931 | orchestrator | 
2026-03-27 01:12:02.234955 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-27 01:12:02.234962 | orchestrator | Friday 27 March 2026 01:05:37 +0000 (0:00:00.850) 0:01:45.242 **********
2026-03-27 01:12:02.234969 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-27 01:12:02.234980 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.234988 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-27 01:12:02.234994 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235001 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-27 01:12:02.235007 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-27 01:12:02.235077 | orchestrator | 
2026-03-27 01:12:02.235086 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-27 01:12:02.235093 | orchestrator | Friday 27 March 2026 01:05:44 +0000 (0:00:07.080) 0:01:52.323 **********
2026-03-27 01:12:02.235098 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.235105 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235112 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235120 | orchestrator | 
2026-03-27 01:12:02.235127 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-27 01:12:02.235133 | orchestrator | Friday 27 March 2026 01:05:45 +0000 (0:00:00.377) 0:01:52.700 **********
2026-03-27 01:12:02.235140 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-27 01:12:02.235147 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.235154 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-27 01:12:02.235161 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235167 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-27 01:12:02.235174 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235181 | orchestrator | 
2026-03-27 01:12:02.235187 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-27 01:12:02.235194 | orchestrator | Friday 27 March 2026 01:05:46 +0000 (0:00:01.022) 0:01:53.722 **********
2026-03-27 01:12:02.235200 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235207 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235213 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.235219 | orchestrator | 
2026-03-27 01:12:02.235226 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-27 01:12:02.235232 | orchestrator | Friday 27 March 2026 01:05:46 +0000 (0:00:00.457) 0:01:54.180 **********
2026-03-27 01:12:02.235238 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235244 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235250 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.235256 | orchestrator | 
2026-03-27 01:12:02.235262 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-27 01:12:02.235267 | orchestrator | Friday 27 March 2026 01:05:47 +0000 (0:00:00.965) 0:01:55.145 **********
2026-03-27 01:12:02.235273 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235279 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235306 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.235312 | orchestrator | 
2026-03-27 01:12:02.235318 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-27 01:12:02.235323 | orchestrator | Friday 27 March 2026 01:05:49 +0000 (0:00:02.153) 0:01:57.298 **********
2026-03-27 01:12:02.235356 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235363 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235369 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:12:02.235375 | orchestrator | 
2026-03-27 01:12:02.235381 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-27 01:12:02.235388 | orchestrator | Friday 27 March 2026 01:06:13 +0000 (0:00:24.025) 0:02:21.324 **********
2026-03-27 01:12:02.235395 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235401 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235409 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:12:02.235416 | orchestrator | 
2026-03-27 01:12:02.235423 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-27 01:12:02.235430 | orchestrator | Friday 27 March 2026 01:06:25 +0000 (0:00:11.567) 0:02:32.892 **********
2026-03-27 01:12:02.235437 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:12:02.235444 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235451 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235458 | orchestrator | 
2026-03-27 01:12:02.235465 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-27 01:12:02.235472 | orchestrator | Friday 27 March 2026 01:06:26 +0000 (0:00:00.975) 0:02:33.867 **********
2026-03-27 01:12:02.235479 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235485 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235492 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.235499 | orchestrator | 
2026-03-27 01:12:02.235506 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-27 01:12:02.235513 | orchestrator | Friday 27 March 2026 01:06:38 +0000 (0:00:12.028) 0:02:45.896 **********
2026-03-27 01:12:02.235519 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.235526 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235533 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235540 | orchestrator | 
2026-03-27 01:12:02.235546 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-27 01:12:02.235553 | orchestrator | Friday 27 March 2026 01:06:39 +0000 (0:00:01.153) 0:02:47.049 **********
2026-03-27 01:12:02.235560 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.235567 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.235574 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.235580 | orchestrator | 
2026-03-27 01:12:02.235586 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-27 01:12:02.235592 | orchestrator | 
2026-03-27 01:12:02.235599 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-27 01:12:02.235605 | orchestrator | Friday 27 March 2026 01:06:39 +0000 (0:00:00.291) 0:02:47.341 **********
2026-03-27 01:12:02.235611 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 01:12:02.235618 | orchestrator | 
2026-03-27 01:12:02.235624 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-27 01:12:02.235630 | orchestrator | Friday 27 March 2026 01:06:40 +0000 (0:00:00.801) 0:02:48.142 **********
2026-03-27 01:12:02.235636 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-27 01:12:02.235642 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-27 01:12:02.235649 | orchestrator | 
2026-03-27 01:12:02.235662 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-27 01:12:02.235668 | orchestrator | Friday 27 March 2026 01:06:43 +0000 (0:00:02.681) 0:02:50.824 **********
2026-03-27 01:12:02.235675 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-27 01:12:02.235700 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-27 01:12:02.235706 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-27 01:12:02.235713 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-27 01:12:02.235720 | orchestrator | 
2026-03-27 01:12:02.235726 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-27 01:12:02.235731 | orchestrator | Friday 27 March 2026 01:06:49 +0000 (0:00:05.994) 0:02:56.819 **********
2026-03-27 01:12:02.235739 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-27 01:12:02.235748 | orchestrator | 
2026-03-27 01:12:02.235755 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-27 01:12:02.235760 | orchestrator | Friday 27 March 2026 01:06:52 +0000 (0:00:02.771) 0:02:59.591 **********
2026-03-27 01:12:02.235766 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-27 01:12:02.235772 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-27 01:12:02.235777 | orchestrator | 
2026-03-27 01:12:02.235882 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-27 01:12:02.235897 | orchestrator | Friday 27 March 2026 01:06:55 +0000 (0:00:03.179) 0:03:02.770
********** 2026-03-27 01:12:02.235903 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-27 01:12:02.235909 | orchestrator | 2026-03-27 01:12:02.235915 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-27 01:12:02.235920 | orchestrator | Friday 27 March 2026 01:06:58 +0000 (0:00:03.231) 0:03:06.002 ********** 2026-03-27 01:12:02.235926 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-27 01:12:02.235933 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-27 01:12:02.235938 | orchestrator | 2026-03-27 01:12:02.235944 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-27 01:12:02.235960 | orchestrator | Friday 27 March 2026 01:07:06 +0000 (0:00:07.419) 0:03:13.421 ********** 2026-03-27 01:12:02.235973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.235985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.236006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.236019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.236026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.236033 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.236060 | orchestrator | 2026-03-27 01:12:02.236066 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-27 01:12:02.236073 | orchestrator | Friday 27 March 2026 01:07:07 +0000 (0:00:01.693) 0:03:15.115 ********** 2026-03-27 01:12:02.236079 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.236092 | orchestrator | 2026-03-27 01:12:02.236098 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-27 01:12:02.236105 | orchestrator | Friday 27 March 2026 01:07:07 +0000 (0:00:00.135) 0:03:15.250 ********** 2026-03-27 01:12:02.236111 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.236207 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.236214 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.236222 | orchestrator | 2026-03-27 01:12:02.236227 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-27 01:12:02.236233 | orchestrator | Friday 27 March 2026 01:07:08 +0000 (0:00:00.309) 0:03:15.560 ********** 2026-03-27 01:12:02.236239 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-27 01:12:02.236245 | orchestrator | 2026-03-27 01:12:02.236252 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-27 01:12:02.236259 | orchestrator | Friday 27 March 2026 01:07:08 +0000 
(0:00:00.745) 0:03:16.305 ********** 2026-03-27 01:12:02.236264 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.236270 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.236277 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.236289 | orchestrator | 2026-03-27 01:12:02.236296 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-27 01:12:02.236302 | orchestrator | Friday 27 March 2026 01:07:09 +0000 (0:00:00.291) 0:03:16.597 ********** 2026-03-27 01:12:02.236309 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:12:02.236316 | orchestrator | 2026-03-27 01:12:02.236322 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-27 01:12:02.236330 | orchestrator | Friday 27 March 2026 01:07:09 +0000 (0:00:00.699) 0:03:17.297 ********** 2026-03-27 01:12:02.236335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.236348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.236367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.236384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.236390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-27 01:12:02.236397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.236401 | orchestrator | 2026-03-27 01:12:02.236405 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-27 01:12:02.236409 | orchestrator | Friday 27 March 2026 01:07:11 +0000 (0:00:02.056) 0:03:19.354 ********** 2026-03-27 01:12:02.236414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 01:12:02.236421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.236425 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.236433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 01:12:02.236437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.236441 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.236451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 01:12:02.236462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.236469 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.236476 | orchestrator | 2026-03-27 01:12:02.236482 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-27 01:12:02.236489 | orchestrator | Friday 27 March 2026 01:07:12 +0000 (0:00:00.651) 0:03:20.006 ********** 2026-03-27 01:12:02.236499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 01:12:02.236507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.236512 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.237296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 01:12:02.237377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.237389 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.237403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 01:12:02.237410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.237416 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.237422 | orchestrator | 2026-03-27 01:12:02.237429 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-27 01:12:02.237435 | orchestrator | Friday 27 March 2026 01:07:13 +0000 (0:00:00.992) 0:03:20.999 ********** 2026-03-27 01:12:02.237448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.237457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.237465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.237469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.237478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.237485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.237489 | orchestrator | 2026-03-27 01:12:02.237493 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-27 01:12:02.237497 | orchestrator | Friday 27 March 2026 01:07:15 +0000 (0:00:02.191) 0:03:23.190 ********** 2026-03-27 01:12:02.237501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.237509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.237518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.237525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.237531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.237537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.237543 | orchestrator | 2026-03-27 01:12:02.237549 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-27 01:12:02.237556 | orchestrator | Friday 27 March 2026 01:07:21 +0000 (0:00:05.719) 0:03:28.910 ********** 2026-03-27 01:12:02.237565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 01:12:02.237580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.237587 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.237593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 01:12:02.237600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.237606 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.237615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-27 01:12:02.237622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.237638 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.237644 | orchestrator | 2026-03-27 01:12:02.237650 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-27 01:12:02.237656 | orchestrator | Friday 27 March 2026 01:07:22 +0000 (0:00:00.490) 0:03:29.400 ********** 2026-03-27 01:12:02.237662 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:02.237681 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:02.237687 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:02.237694 | orchestrator | 2026-03-27 01:12:02.237750 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-27 01:12:02.237758 | orchestrator | Friday 27 March 2026 01:07:23 +0000 (0:00:01.529) 0:03:30.929 ********** 2026-03-27 01:12:02.237763 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.237769 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.237782 | orchestrator | skipping: 
[testbed-node-2] 2026-03-27 01:12:02.237787 | orchestrator | 2026-03-27 01:12:02.237793 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-27 01:12:02.237798 | orchestrator | Friday 27 March 2026 01:07:23 +0000 (0:00:00.270) 0:03:31.199 ********** 2026-03-27 01:12:02.237805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.237816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.237836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:02.237843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.237851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.237857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.237862 | orchestrator | 2026-03-27 
01:12:02.237868 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-27 01:12:02.237873 | orchestrator | Friday 27 March 2026 01:07:25 +0000 (0:00:01.752) 0:03:32.952 ********** 2026-03-27 01:12:02.237879 | orchestrator | 2026-03-27 01:12:02.237884 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-27 01:12:02.237890 | orchestrator | Friday 27 March 2026 01:07:25 +0000 (0:00:00.124) 0:03:33.077 ********** 2026-03-27 01:12:02.237896 | orchestrator | 2026-03-27 01:12:02.237902 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-27 01:12:02.237907 | orchestrator | Friday 27 March 2026 01:07:25 +0000 (0:00:00.136) 0:03:33.213 ********** 2026-03-27 01:12:02.237913 | orchestrator | 2026-03-27 01:12:02.237918 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-27 01:12:02.237928 | orchestrator | Friday 27 March 2026 01:07:26 +0000 (0:00:00.412) 0:03:33.625 ********** 2026-03-27 01:12:02.237938 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:02.237943 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:02.237949 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:02.237955 | orchestrator | 2026-03-27 01:12:02.237961 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-27 01:12:02.237967 | orchestrator | Friday 27 March 2026 01:07:45 +0000 (0:00:19.221) 0:03:52.847 ********** 2026-03-27 01:12:02.237973 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:02.237979 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:02.237984 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:02.237991 | orchestrator | 2026-03-27 01:12:02.237998 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-27 01:12:02.238006 | 
orchestrator | 2026-03-27 01:12:02.238068 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-27 01:12:02.238074 | orchestrator | Friday 27 March 2026 01:07:50 +0000 (0:00:05.137) 0:03:57.984 ********** 2026-03-27 01:12:02.238081 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:12:02.238098 | orchestrator | 2026-03-27 01:12:02.238104 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-27 01:12:02.238110 | orchestrator | Friday 27 March 2026 01:07:51 +0000 (0:00:01.218) 0:03:59.203 ********** 2026-03-27 01:12:02.238116 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.238123 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.238129 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:12:02.238134 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.238140 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.238146 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.238151 | orchestrator | 2026-03-27 01:12:02.238157 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-27 01:12:02.238182 | orchestrator | Friday 27 March 2026 01:07:52 +0000 (0:00:00.784) 0:03:59.987 ********** 2026-03-27 01:12:02.238187 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.238193 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.238198 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.238203 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-27 01:12:02.238209 | orchestrator | 2026-03-27 01:12:02.238215 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-27 01:12:02.238226 | orchestrator | Friday 
27 March 2026 01:07:53 +0000 (0:00:01.104) 0:04:01.092 ********** 2026-03-27 01:12:02.238232 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-27 01:12:02.238238 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-27 01:12:02.238243 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-27 01:12:02.238248 | orchestrator | 2026-03-27 01:12:02.238264 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-27 01:12:02.238269 | orchestrator | Friday 27 March 2026 01:07:54 +0000 (0:00:00.936) 0:04:02.028 ********** 2026-03-27 01:12:02.238282 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-27 01:12:02.238289 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-27 01:12:02.238294 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-27 01:12:02.238300 | orchestrator | 2026-03-27 01:12:02.238306 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-27 01:12:02.238312 | orchestrator | Friday 27 March 2026 01:07:55 +0000 (0:00:01.183) 0:04:03.212 ********** 2026-03-27 01:12:02.238318 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-27 01:12:02.238324 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.238330 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-27 01:12:02.238336 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.238349 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-27 01:12:02.238354 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:12:02.238360 | orchestrator | 2026-03-27 01:12:02.238366 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-27 01:12:02.238372 | orchestrator | Friday 27 March 2026 01:07:56 +0000 (0:00:00.834) 0:04:04.046 ********** 2026-03-27 01:12:02.238377 | 
orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-27 01:12:02.238383 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-27 01:12:02.238388 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.238394 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-27 01:12:02.238399 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-27 01:12:02.238404 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-27 01:12:02.238412 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-27 01:12:02.238418 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.238424 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-27 01:12:02.238430 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-27 01:12:02.238436 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.238442 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-27 01:12:02.238447 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-27 01:12:02.238453 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-27 01:12:02.238459 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-27 01:12:02.238464 | orchestrator | 2026-03-27 01:12:02.238470 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-27 01:12:02.238481 | orchestrator | Friday 27 March 2026 01:07:58 +0000 (0:00:01.954) 0:04:06.001 ********** 2026-03-27 01:12:02.238488 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.238493 | orchestrator | skipping: 
[testbed-node-1] 2026-03-27 01:12:02.238498 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.238504 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:12:02.238509 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:12:02.238515 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:12:02.238521 | orchestrator | 2026-03-27 01:12:02.238526 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-27 01:12:02.238532 | orchestrator | Friday 27 March 2026 01:07:59 +0000 (0:00:01.022) 0:04:07.023 ********** 2026-03-27 01:12:02.238538 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.238544 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.238550 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.238556 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:12:02.238561 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:12:02.238567 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:12:02.238572 | orchestrator | 2026-03-27 01:12:02.238578 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-27 01:12:02.238583 | orchestrator | Friday 27 March 2026 01:08:01 +0000 (0:00:01.894) 0:04:08.917 ********** 2026-03-27 01:12:02.238591 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2026-03-27 01:12:02.238629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}}) 2026-03-27 01:12:02.238734 | orchestrator | 2026-03-27 01:12:02.238740 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-27 01:12:02.238745 | orchestrator | Friday 27 March 2026 01:08:03 +0000 (0:00:02.325) 0:04:11.243 ********** 2026-03-27 01:12:02.238766 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:12:02.238774 | orchestrator | 2026-03-27 01:12:02.238780 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-27 01:12:02.238785 | orchestrator | Friday 27 March 2026 01:08:05 +0000 (0:00:01.244) 0:04:12.487 ********** 2026-03-27 01:12:02.238795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238856 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2026-03-27 01:12:02.238862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238877 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238917 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.238934 | orchestrator | 2026-03-27 01:12:02.238940 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-27 01:12:02.238946 | orchestrator | Friday 27 March 2026 01:08:08 +0000 (0:00:03.548) 0:04:16.036 ********** 2026-03-27 01:12:02.238957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-27 01:12:02.238965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-27 01:12:02.238971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.238977 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.238987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-27 01:12:02.239002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-27 01:12:02.239013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.239033 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.239052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-27 01:12:02.239058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-27 01:12:02.239064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.239079 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:12:02.239085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-27 01:12:02.239091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.239097 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.239107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-27 01:12:02.239113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.239118 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.239124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-27 01:12:02.239130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.239136 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.239142 | orchestrator |
2026-03-27 01:12:02.239149 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-27 01:12:02.239159 | orchestrator | Friday 27 March 2026 01:08:10 +0000 (0:00:01.577) 0:04:17.613 **********
2026-03-27 01:12:02.239169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-27 01:12:02.239176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-27 01:12:02.239440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.239460 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:12:02.239467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-27 01:12:02.239475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-27 01:12:02.239488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.239502 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:12:02.239508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-27 01:12:02.239513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-27 01:12:02.239530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.239539 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:12:02.239545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-27 01:12:02.239551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-27 01:12:02.239579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.239586 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.239591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.239597 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.239603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-27 01:12:02.239613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.239619 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.239624 | orchestrator |
2026-03-27 01:12:02.239630 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-27 01:12:02.239637 | orchestrator | Friday 27 March 2026 01:08:12 +0000 (0:00:02.100) 0:04:19.714 **********
2026-03-27 01:12:02.239643 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.239649 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.239655 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.239661 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-27 01:12:02.239666 | orchestrator |
2026-03-27 01:12:02.239673 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-27 01:12:02.239679 | orchestrator | Friday 27 March 2026 01:08:13 +0000 (0:00:01.052) 0:04:20.767 **********
2026-03-27 01:12:02.239686 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-27 01:12:02.239692 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-27 01:12:02.239698 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-27 01:12:02.239709 | orchestrator |
2026-03-27 01:12:02.239715 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-27 01:12:02.239721 | orchestrator | Friday 27 March 2026 01:08:14 +0000 (0:00:01.003) 0:04:21.770 **********
2026-03-27 01:12:02.239727 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-27 01:12:02.239733 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-27 01:12:02.239738 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-27 01:12:02.239745 | orchestrator |
2026-03-27 01:12:02.239753 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-27 01:12:02.239758 | orchestrator | Friday 27 March 2026 01:08:15 +0000 (0:00:01.243) 0:04:23.015 **********
2026-03-27 01:12:02.239767 | orchestrator | ok: [testbed-node-3]
2026-03-27 01:12:02.239775 | orchestrator | ok: [testbed-node-4]
2026-03-27 01:12:02.239781 | orchestrator | ok: [testbed-node-5]
2026-03-27 01:12:02.239786 | orchestrator |
2026-03-27 01:12:02.239792 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-27 01:12:02.239798 | orchestrator | Friday 27 March 2026 01:08:16 +0000 (0:00:00.532) 0:04:23.547 **********
2026-03-27 01:12:02.239803 | orchestrator | ok: [testbed-node-3]
2026-03-27 01:12:02.239809 | orchestrator | ok: [testbed-node-4]
2026-03-27 01:12:02.239814 | orchestrator | ok: [testbed-node-5]
2026-03-27 01:12:02.239820 | orchestrator |
2026-03-27 01:12:02.239825 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-27 01:12:02.239831 | orchestrator | Friday 27 March 2026 01:08:16 +0000 (0:00:00.487) 0:04:24.034 **********
2026-03-27 01:12:02.239837 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-27 01:12:02.239844 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-27 01:12:02.239850 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-27 01:12:02.239856 | orchestrator |
2026-03-27 01:12:02.239861 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-27 01:12:02.239867 | orchestrator | Friday 27 March 2026 01:08:17 +0000 (0:00:01.187) 0:04:25.222 **********
2026-03-27 01:12:02.239873 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-27 01:12:02.239888 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-27 01:12:02.239894 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-27 01:12:02.239900 | orchestrator |
2026-03-27 01:12:02.239905 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-27 01:12:02.239911 | orchestrator | Friday 27 March 2026 01:08:19 +0000 (0:00:01.341) 0:04:26.564 **********
2026-03-27 01:12:02.239918 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-27 01:12:02.239924 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-27 01:12:02.239931 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-27 01:12:02.239938 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-27 01:12:02.239944 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-27 01:12:02.239951 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-27 01:12:02.239957 | orchestrator |
2026-03-27 01:12:02.239963 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-27 01:12:02.239968 | orchestrator | Friday 27 March 2026 01:08:22 +0000 (0:00:03.539) 0:04:30.104 **********
2026-03-27 01:12:02.239974 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:12:02.239980 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:12:02.239987 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:12:02.239993 | orchestrator |
2026-03-27 01:12:02.239998 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-27 01:12:02.240004 | orchestrator | Friday 27 March 2026 01:08:23 +0000 (0:00:00.362) 0:04:30.466 **********
2026-03-27 01:12:02.240010 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:12:02.240015 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:12:02.240021 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:12:02.240077 | orchestrator |
2026-03-27 01:12:02.240085 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-27 01:12:02.240091 | orchestrator | Friday 27 March 2026 01:08:23 +0000 (0:00:00.339) 0:04:30.806 **********
2026-03-27 01:12:02.240097 | orchestrator | changed: [testbed-node-3]
2026-03-27 01:12:02.240104 | orchestrator | changed: [testbed-node-4]
2026-03-27 01:12:02.240116 | orchestrator | changed: [testbed-node-5]
2026-03-27 01:12:02.240126 | orchestrator |
2026-03-27 01:12:02.240132 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-27 01:12:02.240137 | orchestrator | Friday 27 March 2026 01:08:24 +0000 (0:00:01.467) 0:04:32.273 **********
2026-03-27 01:12:02.240151 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-27 01:12:02.240159 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-27 01:12:02.240165 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-27 01:12:02.240171 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-27 01:12:02.240178 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-27 01:12:02.240184 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-27 01:12:02.240191 | orchestrator |
2026-03-27 01:12:02.240198 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-27 01:12:02.240204 | orchestrator | Friday 27 March 2026 01:08:28 +0000 (0:00:03.396) 0:04:35.669 **********
2026-03-27 01:12:02.240211 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-27 01:12:02.240216 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-27 01:12:02.240222 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-27 01:12:02.240228 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-27 01:12:02.240233 | orchestrator | changed: [testbed-node-3]
2026-03-27 01:12:02.240239 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-27 01:12:02.240245 | orchestrator | changed: [testbed-node-4]
2026-03-27 01:12:02.240251 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-27 01:12:02.240257 | orchestrator | changed: [testbed-node-5]
2026-03-27 01:12:02.240263 | orchestrator |
2026-03-27 01:12:02.240270 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] *************************
2026-03-27 01:12:02.240280 | orchestrator | Friday 27 March 2026 01:08:31 +0000 (0:00:03.305) 0:04:38.975 **********
2026-03-27 01:12:02.240290 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.240302 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.240310 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.240317 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-03-27 01:12:02.240325 | orchestrator |
2026-03-27 01:12:02.240331 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-03-27 01:12:02.240337 | orchestrator | Friday 27 March 2026 01:08:33 +0000 (0:00:01.922) 0:04:40.898 **********
2026-03-27 01:12:02.240343 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-27 01:12:02.240349 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-27 01:12:02.240355 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-27 01:12:02.240362 | orchestrator |
2026-03-27 01:12:02.240367 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-03-27 01:12:02.240373 | orchestrator | Friday 27 March 2026 01:08:34 +0000 (0:00:00.974) 0:04:41.872 **********
2026-03-27 01:12:02.240379 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:12:02.240385 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:12:02.240398 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:12:02.240404 | orchestrator |
2026-03-27 01:12:02.240414 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-27 01:12:02.240419 | orchestrator | Friday 27 March 2026 01:08:34 +0000 (0:00:00.288) 0:04:42.161 **********
2026-03-27 01:12:02.240425 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:12:02.240430 | orchestrator |
2026-03-27 01:12:02.240436 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-27 01:12:02.240441 | orchestrator | Friday 27 March 2026 01:08:34 +0000 (0:00:00.130) 0:04:42.292 **********
2026-03-27 01:12:02.240446 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:12:02.240452 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:12:02.240457 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:12:02.240463 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.240468 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.240474 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.240480 | orchestrator |
2026-03-27 01:12:02.240486 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-27 01:12:02.240492 | orchestrator | Friday 27 March 2026 01:08:35 +0000 (0:00:00.772) 0:04:43.059 **********
2026-03-27 01:12:02.240498 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-27 01:12:02.240517 | orchestrator |
2026-03-27 01:12:02.240524 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-27 01:12:02.240530 | orchestrator | Friday 27 March 2026 01:08:36 +0000 (0:00:00.772) 0:04:43.831 **********
2026-03-27 01:12:02.240536 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:12:02.240542 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:12:02.240547 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:12:02.240553 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.240559 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.240564 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.240571 | orchestrator |
2026-03-27 01:12:02.240576 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-27 01:12:02.240582 | orchestrator | Friday 27 March 2026 01:08:37 +0000 (0:00:00.566) 0:04:44.397 **********
2026-03-27 01:12:02.240598 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-27 01:12:02.240608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-27 01:12:02.240620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-27 01:12:02.240627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy',
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-27 01:12:02.240667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-27 01:12:02.240675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-27 01:12:02.240687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-27 01:12:02.240695 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-27 01:12:02.240701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-27 01:12:02.240716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.240728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.240738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.240750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.240757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.240763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.240776 | orchestrator |
2026-03-27 01:12:02.240782 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-27 01:12:02.240788 | orchestrator | Friday 27 March 2026 01:08:40 +0000 (0:00:03.916) 0:04:48.313 **********
2026-03-27 01:12:02.240798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-27 01:12:02.240804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-27 01:12:02.240810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-27 01:12:02.240823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-27 01:12:02.240832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-27 01:12:02.240844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-27 01:12:02.240854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-27 01:12:02.240861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.240872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.240879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.240891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.240897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.240907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.240913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.240919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.240926 | orchestrator | 2026-03-27 01:12:02.240932 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-27 01:12:02.240938 | orchestrator | Friday 27 March 2026 01:08:47 +0000 (0:00:06.717) 0:04:55.031 ********** 2026-03-27 01:12:02.240944 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.240951 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.240958 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:12:02.240965 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.240975 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.240982 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.240988 | orchestrator | 2026-03-27 01:12:02.240995 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-27 01:12:02.241022 | orchestrator | Friday 27 March 2026 01:08:49 +0000 (0:00:01.607) 0:04:56.639 ********** 2026-03-27 01:12:02.241029 | orchestrator | 
skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-27 01:12:02.241056 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-27 01:12:02.241063 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-27 01:12:02.241068 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-27 01:12:02.241074 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-27 01:12:02.241080 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-27 01:12:02.241085 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-27 01:12:02.241091 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.241097 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-27 01:12:02.241102 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.241108 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-27 01:12:02.241114 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.241120 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-27 01:12:02.241127 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-27 01:12:02.241134 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-27 01:12:02.241143 | orchestrator | 2026-03-27 01:12:02.241151 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-27 01:12:02.241166 | orchestrator | Friday 27 March 2026 01:08:52 +0000 (0:00:03.272) 
0:04:59.912 ********** 2026-03-27 01:12:02.241172 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.241178 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.241183 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:12:02.241189 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.241194 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.241200 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.241206 | orchestrator | 2026-03-27 01:12:02.241212 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-27 01:12:02.241218 | orchestrator | Friday 27 March 2026 01:08:53 +0000 (0:00:00.652) 0:05:00.565 ********** 2026-03-27 01:12:02.241223 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-27 01:12:02.241233 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-27 01:12:02.241244 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-27 01:12:02.241251 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-27 01:12:02.241257 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-27 01:12:02.241262 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-27 01:12:02.241268 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-27 01:12:02.241274 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-27 01:12:02.241280 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-27 01:12:02.241353 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-27 01:12:02.241364 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-27 01:12:02.241369 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.241375 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-27 01:12:02.241380 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.241386 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-27 01:12:02.241391 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.241396 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-27 01:12:02.241413 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-27 01:12:02.241420 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-27 01:12:02.241435 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-27 01:12:02.241441 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-27 01:12:02.241447 | orchestrator | 2026-03-27 01:12:02.241453 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-27 01:12:02.241459 | orchestrator | Friday 27 March 2026 01:08:58 +0000 (0:00:05.107) 0:05:05.672 ********** 
2026-03-27 01:12:02.241465 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-27 01:12:02.241471 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-27 01:12:02.241477 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-27 01:12:02.241483 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-27 01:12:02.241488 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-27 01:12:02.241494 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-27 01:12:02.241500 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-27 01:12:02.241505 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-27 01:12:02.241510 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-27 01:12:02.241515 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-27 01:12:02.241521 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-27 01:12:02.241527 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-27 01:12:02.241533 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-27 01:12:02.241538 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.241544 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-27 01:12:02.241550 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.241555 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-27 01:12:02.241561 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.241567 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-27 01:12:02.241572 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-27 01:12:02.241578 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-27 01:12:02.241592 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-27 01:12:02.241599 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-27 01:12:02.241610 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-27 01:12:02.241615 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-27 01:12:02.241620 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-27 01:12:02.241625 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-27 01:12:02.241631 | orchestrator | 2026-03-27 01:12:02.241647 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-27 01:12:02.241653 | orchestrator | Friday 27 March 2026 01:09:05 +0000 (0:00:07.520) 0:05:13.193 ********** 2026-03-27 01:12:02.241659 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.241665 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.241671 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:12:02.241676 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.241682 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.241688 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.241693 | orchestrator | 2026-03-27 01:12:02.241699 | 
orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-27 01:12:02.241708 | orchestrator | Friday 27 March 2026 01:09:06 +0000 (0:00:00.509) 0:05:13.702 ********** 2026-03-27 01:12:02.241716 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.241721 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.241727 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:12:02.241732 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.241738 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.241743 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.241749 | orchestrator | 2026-03-27 01:12:02.241754 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-27 01:12:02.241760 | orchestrator | Friday 27 March 2026 01:09:06 +0000 (0:00:00.643) 0:05:14.346 ********** 2026-03-27 01:12:02.241766 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.241772 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.241778 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:12:02.241784 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.241789 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:12:02.241795 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:12:02.241800 | orchestrator | 2026-03-27 01:12:02.241806 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-03-27 01:12:02.241811 | orchestrator | Friday 27 March 2026 01:09:08 +0000 (0:00:01.849) 0:05:16.195 ********** 2026-03-27 01:12:02.241817 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.241829 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.241835 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.241841 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:12:02.241847 | orchestrator | changed: [testbed-node-5] 2026-03-27 
01:12:02.241853 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:12:02.241859 | orchestrator | 2026-03-27 01:12:02.241865 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-27 01:12:02.241871 | orchestrator | Friday 27 March 2026 01:09:11 +0000 (0:00:02.292) 0:05:18.487 ********** 2026-03-27 01:12:02.241878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-27 01:12:02.241896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-27 01:12:02.241909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.241916 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.241923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-27 01:12:02.241928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-27 01:12:02.241937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.241946 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.241950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-27 01:12:02.241956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-27 01:12:02.241967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.241973 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:12:02.241980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-27 01:12:02.241991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.241998 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.242004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-27 01:12:02.242104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.242115 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.242125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-27 01:12:02.242138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-27 01:12:02.242145 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.242151 | orchestrator | 2026-03-27 01:12:02.242157 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-27 01:12:02.242163 | orchestrator | Friday 27 March 2026 01:09:12 +0000 (0:00:01.345) 0:05:19.833 ********** 2026-03-27 01:12:02.242169 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-27 01:12:02.242175 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic) 
 2026-03-27 01:12:02.242192 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.242198 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-27 01:12:02.242204 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-27 01:12:02.242210 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.242216 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-27 01:12:02.242220 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-27 01:12:02.242223 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:12:02.242227 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-27 01:12:02.242231 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-27 01:12:02.242235 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.242238 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-27 01:12:02.242242 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-27 01:12:02.242246 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.242249 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-27 01:12:02.242253 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-27 01:12:02.242262 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.242266 | orchestrator | 2026-03-27 01:12:02.242273 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-27 01:12:02.242278 | orchestrator | Friday 27 March 2026 01:09:13 +0000 (0:00:00.914) 0:05:20.748 ********** 2026-03-27 01:12:02.242293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242342 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242349 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:02.242678 | orchestrator | 2026-03-27 01:12:02.242685 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-27 01:12:02.242693 | orchestrator | Friday 27 March 2026 01:09:15 +0000 (0:00:02.606) 0:05:23.355 ********** 2026-03-27 01:12:02.242699 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.242706 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.242713 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:12:02.242720 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.242728 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.242735 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.242741 | orchestrator | 2026-03-27 01:12:02.242747 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-27 01:12:02.242768 | orchestrator | Friday 27 March 2026 01:09:16 +0000 (0:00:00.768) 0:05:24.123 ********** 2026-03-27 01:12:02.242776 | orchestrator | 2026-03-27 01:12:02.242783 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-27 01:12:02.242790 | orchestrator | Friday 27 March 2026 01:09:16 +0000 (0:00:00.130) 0:05:24.253 ********** 2026-03-27 01:12:02.242794 | orchestrator | 2026-03-27 01:12:02.242799 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-27 01:12:02.242867 | orchestrator | Friday 27 March 2026 01:09:16 
+0000 (0:00:00.128) 0:05:24.381 ********** 2026-03-27 01:12:02.242879 | orchestrator | 2026-03-27 01:12:02.242888 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-27 01:12:02.242899 | orchestrator | Friday 27 March 2026 01:09:17 +0000 (0:00:00.129) 0:05:24.511 ********** 2026-03-27 01:12:02.242913 | orchestrator | 2026-03-27 01:12:02.242921 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-27 01:12:02.242927 | orchestrator | Friday 27 March 2026 01:09:17 +0000 (0:00:00.128) 0:05:24.639 ********** 2026-03-27 01:12:02.242934 | orchestrator | 2026-03-27 01:12:02.242942 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-27 01:12:02.242950 | orchestrator | Friday 27 March 2026 01:09:17 +0000 (0:00:00.287) 0:05:24.926 ********** 2026-03-27 01:12:02.242958 | orchestrator | 2026-03-27 01:12:02.242966 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-27 01:12:02.242973 | orchestrator | Friday 27 March 2026 01:09:17 +0000 (0:00:00.129) 0:05:25.056 ********** 2026-03-27 01:12:02.242982 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:02.242989 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:02.242998 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:02.243006 | orchestrator | 2026-03-27 01:12:02.243012 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-27 01:12:02.243021 | orchestrator | Friday 27 March 2026 01:09:24 +0000 (0:00:07.045) 0:05:32.101 ********** 2026-03-27 01:12:02.243028 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:02.243084 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:02.243097 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:02.243104 | orchestrator | 2026-03-27 01:12:02.243111 | orchestrator | RUNNING HANDLER 
[nova-cell : Restart nova-ssh container] *********************** 2026-03-27 01:12:02.243118 | orchestrator | Friday 27 March 2026 01:09:36 +0000 (0:00:11.420) 0:05:43.522 ********** 2026-03-27 01:12:02.243125 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:12:02.243133 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:12:02.243139 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:12:02.243146 | orchestrator | 2026-03-27 01:12:02.243172 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-27 01:12:02.243183 | orchestrator | Friday 27 March 2026 01:09:56 +0000 (0:00:20.553) 0:06:04.076 ********** 2026-03-27 01:12:02.243190 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:12:02.243197 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:12:02.243204 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:12:02.243208 | orchestrator | 2026-03-27 01:12:02.243212 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-27 01:12:02.243217 | orchestrator | Friday 27 March 2026 01:10:27 +0000 (0:00:31.218) 0:06:35.294 ********** 2026-03-27 01:12:02.243222 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:12:02.243226 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:12:02.243229 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:12:02.243234 | orchestrator | 2026-03-27 01:12:02.243238 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-27 01:12:02.243242 | orchestrator | Friday 27 March 2026 01:10:28 +0000 (0:00:00.838) 0:06:36.133 ********** 2026-03-27 01:12:02.243247 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:12:02.243252 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:12:02.243256 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:12:02.243259 | orchestrator | 2026-03-27 01:12:02.243264 | orchestrator | RUNNING HANDLER [nova-cell : 
Restart nova-compute container] ******************* 2026-03-27 01:12:02.243268 | orchestrator | Friday 27 March 2026 01:10:29 +0000 (0:00:00.801) 0:06:36.934 ********** 2026-03-27 01:12:02.243273 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:12:02.243278 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:12:02.243282 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:12:02.243286 | orchestrator | 2026-03-27 01:12:02.243291 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-27 01:12:02.243308 | orchestrator | Friday 27 March 2026 01:10:47 +0000 (0:00:17.971) 0:06:54.906 ********** 2026-03-27 01:12:02.243313 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.243318 | orchestrator | 2026-03-27 01:12:02.243324 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-27 01:12:02.243331 | orchestrator | Friday 27 March 2026 01:10:47 +0000 (0:00:00.333) 0:06:55.240 ********** 2026-03-27 01:12:02.243337 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:02.243344 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:12:02.243352 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:12:02.243360 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:02.243367 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:02.243374 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-27 01:12:02.243383 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-27 01:12:02.243390 | orchestrator |
2026-03-27 01:12:02.243398 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-27 01:12:02.243404 | orchestrator | Friday 27 March 2026 01:11:10 +0000 (0:00:22.189) 0:07:17.429 **********
2026-03-27 01:12:02.243411 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:12:02.243471 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.243479 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.243486 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:12:02.243493 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:12:02.243498 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.243504 | orchestrator |
2026-03-27 01:12:02.243511 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-27 01:12:02.243517 | orchestrator | Friday 27 March 2026 01:11:19 +0000 (0:00:09.026) 0:07:26.456 **********
2026-03-27 01:12:02.243535 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.243545 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:12:02.243551 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.243557 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:12:02.243563 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.243569 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2026-03-27 01:12:02.243576 | orchestrator |
2026-03-27 01:12:02.243582 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-27 01:12:02.243588 | orchestrator | Friday 27 March 2026 01:11:22 +0000 (0:00:03.453) 0:07:29.909 **********
2026-03-27 01:12:02.243594 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-27 01:12:02.243599 | orchestrator |
2026-03-27 01:12:02.243605 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-27 01:12:02.243610 | orchestrator | Friday 27 March 2026 01:11:37 +0000 (0:00:14.817) 0:07:44.726 **********
2026-03-27 01:12:02.243618 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-27 01:12:02.243623 | orchestrator |
2026-03-27 01:12:02.243630 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-03-27 01:12:02.243637 | orchestrator | Friday 27 March 2026 01:11:38 +0000 (0:00:01.409) 0:07:46.136 **********
2026-03-27 01:12:02.243644 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:12:02.243649 | orchestrator |
2026-03-27 01:12:02.243656 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-03-27 01:12:02.243662 | orchestrator | Friday 27 March 2026 01:11:40 +0000 (0:00:01.508) 0:07:47.644 **********
2026-03-27 01:12:02.243669 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-27 01:12:02.243748 | orchestrator |
2026-03-27 01:12:02.243767 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-03-27 01:12:02.243774 | orchestrator | Friday 27 March 2026 01:11:53 +0000 (0:00:13.098) 0:08:00.743 **********
2026-03-27 01:12:02.243780 | orchestrator | ok: [testbed-node-3]
2026-03-27 01:12:02.243800 | orchestrator | ok: [testbed-node-4]
2026-03-27 01:12:02.243808 | orchestrator | ok: [testbed-node-5]
2026-03-27 01:12:02.243815 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:12:02.243821 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:12:02.243827 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:12:02.243834 | orchestrator |
2026-03-27 01:12:02.243843 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-27 01:12:02.243853 | orchestrator |
2026-03-27 01:12:02.243862 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-27 01:12:02.243884 | orchestrator | Friday 27 March 2026 01:11:55 +0000 (0:00:01.714) 0:08:02.457 **********
2026-03-27 01:12:02.243892 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:02.243898 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:12:02.243904 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:12:02.243909 | orchestrator |
2026-03-27 01:12:02.243916 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-27 01:12:02.243922 | orchestrator |
2026-03-27 01:12:02.243929 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-27 01:12:02.243934 | orchestrator | Friday 27 March 2026 01:11:56 +0000 (0:00:01.059) 0:08:03.516 **********
2026-03-27 01:12:02.243945 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.243954 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.243960 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.243966 | orchestrator |
2026-03-27 01:12:02.243972 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-27 01:12:02.243978 | orchestrator |
2026-03-27 01:12:02.243985 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-27 01:12:02.243992 | orchestrator | Friday 27 March 2026 01:11:56 +0000 (0:00:00.507) 0:08:04.024 **********
2026-03-27 01:12:02.243998 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-27 01:12:02.244003 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-27 01:12:02.244009 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-27 01:12:02.244015 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-27 01:12:02.244022 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-27 01:12:02.244029 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-27 01:12:02.244034 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:12:02.244086 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-27 01:12:02.244092 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-27 01:12:02.244098 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-27 01:12:02.244104 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-27 01:12:02.244110 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-27 01:12:02.244116 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-27 01:12:02.244122 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:12:02.244128 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-27 01:12:02.244137 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-27 01:12:02.244145 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-27 01:12:02.244150 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-27 01:12:02.244156 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-27 01:12:02.244161 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-27 01:12:02.244167 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:12:02.244173 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-27 01:12:02.244178 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-27 01:12:02.244184 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-27 01:12:02.244189 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-27 01:12:02.244214 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-27 01:12:02.244221 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-27 01:12:02.244226 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.244231 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-27 01:12:02.244237 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-27 01:12:02.244242 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-27 01:12:02.244248 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-27 01:12:02.244254 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-27 01:12:02.244261 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-27 01:12:02.244270 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.244276 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-27 01:12:02.244282 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-27 01:12:02.244289 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-27 01:12:02.244295 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-27 01:12:02.244300 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-27 01:12:02.244306 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-27 01:12:02.244312 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.244319 | orchestrator |
2026-03-27 01:12:02.244325 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-27 01:12:02.244331 | orchestrator |
2026-03-27 01:12:02.244338 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-27 01:12:02.244344 | orchestrator | Friday 27 March 2026 01:11:57 +0000 (0:00:01.195) 0:08:05.219 **********
2026-03-27 01:12:02.244350 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-27 01:12:02.244356 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-27 01:12:02.244363 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.244372 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-27 01:12:02.244378 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-27 01:12:02.244384 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.244390 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-27 01:12:02.244396 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-27 01:12:02.244402 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.244408 | orchestrator |
2026-03-27 01:12:02.244424 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-27 01:12:02.244430 | orchestrator |
2026-03-27 01:12:02.244436 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-27 01:12:02.244442 | orchestrator | Friday 27 March 2026 01:11:58 +0000 (0:00:00.634) 0:08:05.854 **********
2026-03-27 01:12:02.244449 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.244456 | orchestrator |
2026-03-27 01:12:02.244462 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-27 01:12:02.244468 | orchestrator |
2026-03-27 01:12:02.244472 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-27 01:12:02.244476 | orchestrator | Friday 27 March 2026 01:11:59 +0000 (0:00:00.684) 0:08:06.539 **********
2026-03-27 01:12:02.244480 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:02.244483 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:02.244487 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:02.244491 | orchestrator |
2026-03-27 01:12:02.244495 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:12:02.244499 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:12:02.244512 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0
2026-03-27 01:12:02.244517 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-03-27 01:12:02.244521 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-03-27 01:12:02.244524 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-27 01:12:02.244528 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-27 01:12:02.244532 | orchestrator | testbed-node-5 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-27 01:12:02.244536 | orchestrator |
2026-03-27 01:12:02.244540 | orchestrator |
2026-03-27 01:12:02.244544 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:12:02.244548 | orchestrator | Friday 27 March 2026 01:11:59 +0000 (0:00:00.573) 0:08:07.112 **********
2026-03-27 01:12:02.244551 | orchestrator | ===============================================================================
2026-03-27 01:12:02.244555 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 31.22s
2026-03-27 01:12:02.244559 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.07s
2026-03-27 01:12:02.244563 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 24.03s
2026-03-27 01:12:02.244572 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.19s
2026-03-27 01:12:02.244576 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.55s
2026-03-27 01:12:02.244581 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.22s
2026-03-27 01:12:02.244588 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 17.97s
2026-03-27 01:12:02.244593 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.55s
2026-03-27 01:12:02.244597 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.82s
2026-03-27 01:12:02.244601 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.25s
2026-03-27 01:12:02.244605 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.10s
2026-03-27 01:12:02.244609 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.91s
2026-03-27 01:12:02.244612 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.03s
2026-03-27 01:12:02.244616 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.57s
2026-03-27 01:12:02.244620 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.42s
2026-03-27 01:12:02.244624 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.03s
2026-03-27 01:12:02.244627 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.52s
2026-03-27 01:12:02.244631 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.42s
2026-03-27 01:12:02.244635 | orchestrator | nova : Copying over nova.conf for nova-api-bootstrap -------------------- 7.16s
2026-03-27 01:12:02.244639 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.08s
2026-03-27 01:12:02.244643 | orchestrator | 2026-03-27 01:12:02 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED
2026-03-27 01:12:02.244647 | orchestrator | 2026-03-27 01:12:02 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:12:05.286963 | orchestrator | 2026-03-27 01:12:05 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED
2026-03-27 01:12:05.287097 | orchestrator | 2026-03-27 01:12:05 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:12:08.338242 | orchestrator | 2026-03-27 01:12:08 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED
2026-03-27 01:12:08.338297 | orchestrator | 2026-03-27 01:12:08 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:12:11.382212 | orchestrator | 2026-03-27 01:12:11 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED
2026-03-27 01:12:11.382264 | orchestrator | 2026-03-27 01:12:11 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:12:14.431372 | orchestrator | 2026-03-27 01:12:14 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED
2026-03-27 01:12:14.431429 | orchestrator | 2026-03-27 01:12:14 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:12:17.480003 | orchestrator | 2026-03-27 01:12:17 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED
2026-03-27 01:12:17.480123 | orchestrator | 2026-03-27 01:12:17 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:12:20.521594 | orchestrator | 2026-03-27 01:12:20 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state STARTED
2026-03-27 01:12:20.521654 | orchestrator | 2026-03-27 01:12:20 | INFO  | Wait 1 second(s) until the next check
2026-03-27 01:12:23.572045 | orchestrator | 2026-03-27 01:12:23 | INFO  | Task 9046b4ed-8862-4696-8a3b-2a874814ea77 is in state SUCCESS
2026-03-27 01:12:23.574459 | orchestrator |
2026-03-27 01:12:23.574562 |
orchestrator | 2026-03-27 01:12:23.574573 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-27 01:12:23.574585 | orchestrator | 2026-03-27 01:12:23.574597 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-27 01:12:23.574603 | orchestrator | Friday 27 March 2026 01:07:56 +0000 (0:00:00.346) 0:00:00.346 ********** 2026-03-27 01:12:23.574610 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:12:23.574616 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:12:23.574622 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:12:23.574638 | orchestrator | 2026-03-27 01:12:23.574654 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-27 01:12:23.574661 | orchestrator | Friday 27 March 2026 01:07:56 +0000 (0:00:00.320) 0:00:00.667 ********** 2026-03-27 01:12:23.574667 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-27 01:12:23.574674 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-27 01:12:23.574680 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-27 01:12:23.574685 | orchestrator | 2026-03-27 01:12:23.574691 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-27 01:12:23.574697 | orchestrator | 2026-03-27 01:12:23.574703 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-27 01:12:23.574709 | orchestrator | Friday 27 March 2026 01:07:57 +0000 (0:00:00.347) 0:00:01.015 ********** 2026-03-27 01:12:23.574715 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:12:23.574722 | orchestrator | 2026-03-27 01:12:23.574748 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-27 01:12:23.574765 | 
orchestrator | Friday 27 March 2026 01:07:58 +0000 (0:00:00.711) 0:00:01.727 ********** 2026-03-27 01:12:23.574772 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-27 01:12:23.574779 | orchestrator | 2026-03-27 01:12:23.574785 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-27 01:12:23.574791 | orchestrator | Friday 27 March 2026 01:08:01 +0000 (0:00:03.531) 0:00:05.258 ********** 2026-03-27 01:12:23.574798 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-27 01:12:23.574804 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-27 01:12:23.574884 | orchestrator | 2026-03-27 01:12:23.574891 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-27 01:12:23.574897 | orchestrator | Friday 27 March 2026 01:08:07 +0000 (0:00:05.851) 0:00:11.110 ********** 2026-03-27 01:12:23.574903 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-27 01:12:23.574909 | orchestrator | 2026-03-27 01:12:23.574914 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-27 01:12:23.574920 | orchestrator | Friday 27 March 2026 01:08:10 +0000 (0:00:02.883) 0:00:13.994 ********** 2026-03-27 01:12:23.574926 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-27 01:12:23.574937 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-27 01:12:23.574943 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-27 01:12:23.574949 | orchestrator | 2026-03-27 01:12:23.574955 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-27 01:12:23.574961 | orchestrator | Friday 27 March 2026 01:08:17 +0000 (0:00:07.616) 0:00:21.611 ********** 
2026-03-27 01:12:23.574967 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-27 01:12:23.574972 | orchestrator | 2026-03-27 01:12:23.574978 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-27 01:12:23.574983 | orchestrator | Friday 27 March 2026 01:08:20 +0000 (0:00:03.031) 0:00:24.642 ********** 2026-03-27 01:12:23.574989 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-27 01:12:23.574994 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-27 01:12:23.575000 | orchestrator | 2026-03-27 01:12:23.575061 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-27 01:12:23.575071 | orchestrator | Friday 27 March 2026 01:08:28 +0000 (0:00:07.344) 0:00:31.987 ********** 2026-03-27 01:12:23.575079 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-27 01:12:23.575085 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-27 01:12:23.575092 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-27 01:12:23.575099 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-27 01:12:23.575107 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-27 01:12:23.575115 | orchestrator | 2026-03-27 01:12:23.575132 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-27 01:12:23.575151 | orchestrator | Friday 27 March 2026 01:08:44 +0000 (0:00:16.223) 0:00:48.211 ********** 2026-03-27 01:12:23.575164 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:12:23.575177 | orchestrator | 2026-03-27 01:12:23.575191 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-27 
01:12:23.575205 | orchestrator | Friday 27 March 2026 01:08:45 +0000 (0:00:00.726) 0:00:48.937 ********** 2026-03-27 01:12:23.575218 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575231 | orchestrator | 2026-03-27 01:12:23.575245 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-27 01:12:23.575258 | orchestrator | Friday 27 March 2026 01:08:50 +0000 (0:00:05.203) 0:00:54.140 ********** 2026-03-27 01:12:23.575267 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575272 | orchestrator | 2026-03-27 01:12:23.575277 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-27 01:12:23.575300 | orchestrator | Friday 27 March 2026 01:08:54 +0000 (0:00:04.493) 0:00:58.634 ********** 2026-03-27 01:12:23.575313 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:12:23.575326 | orchestrator | 2026-03-27 01:12:23.575334 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-27 01:12:23.575338 | orchestrator | Friday 27 March 2026 01:08:57 +0000 (0:00:03.002) 0:01:01.637 ********** 2026-03-27 01:12:23.575353 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-27 01:12:23.575363 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-27 01:12:23.575372 | orchestrator | 2026-03-27 01:12:23.575378 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-27 01:12:23.575387 | orchestrator | Friday 27 March 2026 01:09:07 +0000 (0:00:09.766) 0:01:11.403 ********** 2026-03-27 01:12:23.575396 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-27 01:12:23.575404 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-27 
01:12:23.575414 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-27 01:12:23.575420 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-27 01:12:23.575429 | orchestrator | 2026-03-27 01:12:23.575437 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-27 01:12:23.575452 | orchestrator | Friday 27 March 2026 01:09:24 +0000 (0:00:17.133) 0:01:28.537 ********** 2026-03-27 01:12:23.575461 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575470 | orchestrator | 2026-03-27 01:12:23.575478 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-27 01:12:23.575487 | orchestrator | Friday 27 March 2026 01:09:28 +0000 (0:00:03.801) 0:01:32.339 ********** 2026-03-27 01:12:23.575496 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575503 | orchestrator | 2026-03-27 01:12:23.575512 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-27 01:12:23.575521 | orchestrator | Friday 27 March 2026 01:09:33 +0000 (0:00:05.100) 0:01:37.440 ********** 2026-03-27 01:12:23.575529 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:23.575538 | orchestrator | 2026-03-27 01:12:23.575548 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-27 01:12:23.575555 | orchestrator | Friday 27 March 2026 01:09:34 +0000 (0:00:00.644) 0:01:38.084 ********** 2026-03-27 01:12:23.575564 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:12:23.575571 | orchestrator | 2026-03-27 01:12:23.575576 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-27 01:12:23.575582 | 
orchestrator | Friday 27 March 2026 01:09:38 +0000 (0:00:04.036) 0:01:42.121 ********** 2026-03-27 01:12:23.575587 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:12:23.575593 | orchestrator | 2026-03-27 01:12:23.575598 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-27 01:12:23.575604 | orchestrator | Friday 27 March 2026 01:09:39 +0000 (0:00:01.021) 0:01:43.143 ********** 2026-03-27 01:12:23.575609 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575615 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:23.575620 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:23.575626 | orchestrator | 2026-03-27 01:12:23.575631 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-27 01:12:23.575637 | orchestrator | Friday 27 March 2026 01:09:44 +0000 (0:00:05.308) 0:01:48.451 ********** 2026-03-27 01:12:23.575642 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:23.575648 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575653 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:23.575659 | orchestrator | 2026-03-27 01:12:23.575664 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-27 01:12:23.575668 | orchestrator | Friday 27 March 2026 01:09:49 +0000 (0:00:04.405) 0:01:52.857 ********** 2026-03-27 01:12:23.575674 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575679 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:23.575684 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:23.575695 | orchestrator | 2026-03-27 01:12:23.575702 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-27 01:12:23.575707 | orchestrator | Friday 27 March 2026 01:09:49 +0000 (0:00:00.704) 
0:01:53.561 ********** 2026-03-27 01:12:23.575712 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:12:23.575718 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:12:23.575723 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:12:23.575729 | orchestrator | 2026-03-27 01:12:23.575734 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-27 01:12:23.575740 | orchestrator | Friday 27 March 2026 01:09:51 +0000 (0:00:01.723) 0:01:55.285 ********** 2026-03-27 01:12:23.575746 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:23.575752 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575757 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:23.575763 | orchestrator | 2026-03-27 01:12:23.575768 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-27 01:12:23.575774 | orchestrator | Friday 27 March 2026 01:09:52 +0000 (0:00:01.169) 0:01:56.454 ********** 2026-03-27 01:12:23.575779 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575784 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:23.575789 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:23.575795 | orchestrator | 2026-03-27 01:12:23.575800 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-27 01:12:23.575806 | orchestrator | Friday 27 March 2026 01:09:53 +0000 (0:00:01.011) 0:01:57.466 ********** 2026-03-27 01:12:23.575813 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:23.575819 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:23.575825 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575832 | orchestrator | 2026-03-27 01:12:23.575844 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-27 01:12:23.575851 | orchestrator | Friday 27 March 2026 01:09:55 +0000 (0:00:02.042) 0:01:59.508 ********** 
2026-03-27 01:12:23.575857 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:12:23.575863 | orchestrator | changed: [testbed-node-1] 2026-03-27 01:12:23.575869 | orchestrator | changed: [testbed-node-2] 2026-03-27 01:12:23.575875 | orchestrator | 2026-03-27 01:12:23.575882 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-27 01:12:23.575888 | orchestrator | Friday 27 March 2026 01:09:57 +0000 (0:00:01.583) 0:02:01.091 ********** 2026-03-27 01:12:23.575894 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:12:23.575900 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:12:23.575906 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:12:23.575912 | orchestrator | 2026-03-27 01:12:23.575918 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-27 01:12:23.575924 | orchestrator | Friday 27 March 2026 01:09:58 +0000 (0:00:00.594) 0:02:01.686 ********** 2026-03-27 01:12:23.575930 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:12:23.575937 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:12:23.575943 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:12:23.575949 | orchestrator | 2026-03-27 01:12:23.575955 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-27 01:12:23.575962 | orchestrator | Friday 27 March 2026 01:10:00 +0000 (0:00:02.505) 0:02:04.191 ********** 2026-03-27 01:12:23.575968 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:12:23.575974 | orchestrator | 2026-03-27 01:12:23.575980 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-27 01:12:23.575990 | orchestrator | Friday 27 March 2026 01:10:01 +0000 (0:00:00.735) 0:02:04.927 ********** 2026-03-27 01:12:23.575999 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:12:23.576005 | 
orchestrator | 2026-03-27 01:12:23.576027 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-27 01:12:23.576032 | orchestrator | Friday 27 March 2026 01:10:04 +0000 (0:00:03.034) 0:02:07.961 ********** 2026-03-27 01:12:23.576042 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:12:23.576048 | orchestrator | 2026-03-27 01:12:23.576054 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-27 01:12:23.576059 | orchestrator | Friday 27 March 2026 01:10:07 +0000 (0:00:02.831) 0:02:10.792 ********** 2026-03-27 01:12:23.576065 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-27 01:12:23.576071 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-27 01:12:23.576077 | orchestrator | 2026-03-27 01:12:23.576082 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-27 01:12:23.576087 | orchestrator | Friday 27 March 2026 01:10:12 +0000 (0:00:05.836) 0:02:16.629 ********** 2026-03-27 01:12:23.576093 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:12:23.576098 | orchestrator | 2026-03-27 01:12:23.576104 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-27 01:12:23.576109 | orchestrator | Friday 27 March 2026 01:10:15 +0000 (0:00:02.688) 0:02:19.318 ********** 2026-03-27 01:12:23.576115 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:12:23.576121 | orchestrator | ok: [testbed-node-1] 2026-03-27 01:12:23.576126 | orchestrator | ok: [testbed-node-2] 2026-03-27 01:12:23.576132 | orchestrator | 2026-03-27 01:12:23.576138 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-27 01:12:23.576144 | orchestrator | Friday 27 March 2026 01:10:15 +0000 (0:00:00.306) 0:02:19.624 ********** 2026-03-27 01:12:23.576152 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.576167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.576174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.576186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.576194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.576200 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.576207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576271 | orchestrator | 2026-03-27 01:12:23.576275 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-27 01:12:23.576280 | orchestrator | Friday 27 March 2026 01:10:18 +0000 (0:00:02.427) 0:02:22.051 ********** 2026-03-27 01:12:23.576286 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:23.576291 | orchestrator | 2026-03-27 01:12:23.576299 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-27 01:12:23.576310 | orchestrator | Friday 27 March 2026 01:10:18 +0000 (0:00:00.121) 0:02:22.173 ********** 2026-03-27 01:12:23.576320 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:23.576332 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:23.576338 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:23.576344 | orchestrator | 2026-03-27 01:12:23.576349 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-27 01:12:23.576355 | orchestrator | Friday 27 March 2026 01:10:18 +0000 (0:00:00.265) 0:02:22.439 ********** 2026-03-27 01:12:23.576363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 01:12:23.576369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 01:12:23.576375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.576381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.576387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:12:23.576393 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:23.576404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 01:12:23.576414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 01:12:23.576425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.576432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.576438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:12:23.576443 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:23.576449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 01:12:23.576463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 
01:12:23.576470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.576478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.576484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:12:23.576490 | orchestrator | skipping: [testbed-node-2] 2026-03-27 
01:12:23.576496 | orchestrator | 2026-03-27 01:12:23.576502 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-27 01:12:23.576507 | orchestrator | Friday 27 March 2026 01:10:19 +0000 (0:00:00.644) 0:02:23.083 ********** 2026-03-27 01:12:23.576513 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-27 01:12:23.576519 | orchestrator | 2026-03-27 01:12:23.576524 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-27 01:12:23.576530 | orchestrator | Friday 27 March 2026 01:10:20 +0000 (0:00:00.722) 0:02:23.806 ********** 2026-03-27 01:12:23.576536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.576892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.576913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.576919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.576925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.576931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.576937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576950 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.576992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577025 | orchestrator | 2026-03-27 01:12:23.577031 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-27 01:12:23.577037 | orchestrator | Friday 27 March 2026 01:10:24 +0000 (0:00:04.414) 0:02:28.220 ********** 2026-03-27 01:12:23.577043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 01:12:23.577051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 01:12:23.577057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:12:23.577079 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:23.577089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 01:12:23.577095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 01:12:23.577103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:12:23.577125 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:23.577131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 01:12:23.577137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 01:12:23.577146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:12:23.577167 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:23.577173 | orchestrator | 2026-03-27 01:12:23.577178 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-27 
01:12:23.577184 | orchestrator | Friday 27 March 2026 01:10:25 +0000 (0:00:00.687) 0:02:28.908 ********** 2026-03-27 01:12:23.577190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 01:12:23.577200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 01:12:23.577206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:12:23.577229 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:12:23.577235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 01:12:23.577247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 01:12:23.577253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:12:23.577273 | orchestrator | skipping: [testbed-node-1] 2026-03-27 01:12:23.577280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-27 01:12:23.577286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-27 01:12:23.577292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-27 01:12:23.577309 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-27 01:12:23.577314 | orchestrator | skipping: [testbed-node-2] 2026-03-27 01:12:23.577319 | orchestrator | 2026-03-27 01:12:23.577323 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-27 01:12:23.577329 | orchestrator | Friday 27 March 2026 01:10:26 +0000 (0:00:01.101) 0:02:30.010 ********** 2026-03-27 01:12:23.577339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.577348 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.577359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.577365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.577371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.577377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.577386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 
01:12:23.577438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577456 | orchestrator | 2026-03-27 01:12:23.577462 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-27 01:12:23.577468 | orchestrator | Friday 27 March 2026 01:10:31 +0000 (0:00:05.572) 0:02:35.583 ********** 2026-03-27 01:12:23.577473 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-27 01:12:23.577480 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-27 01:12:23.577486 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-27 01:12:23.577492 | orchestrator | 2026-03-27 01:12:23.577498 | orchestrator | TASK [octavia 
: Copying over octavia.conf] ************************************* 2026-03-27 01:12:23.577504 | orchestrator | Friday 27 March 2026 01:10:33 +0000 (0:00:01.821) 0:02:37.404 ********** 2026-03-27 01:12:23.577511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.577520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.577531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.577537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.577550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.577557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.577563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}}) 2026-03-27 01:12:23.577577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577605 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.577626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-27 01:12:23.577632 | orchestrator |
2026-03-27 01:12:23.577644 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-03-27 01:12:23.577658 | orchestrator | Friday 27 March 2026 01:10:52 +0000 (0:00:19.221) 0:02:56.625 **********
2026-03-27 01:12:23.577666 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:23.577672 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:12:23.577678 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:12:23.577684 | orchestrator |
2026-03-27 01:12:23.577690 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-03-27 01:12:23.577697 | orchestrator | Friday 27 March 2026 01:10:54 +0000 (0:00:01.700) 0:02:58.326 **********
2026-03-27 01:12:23.577703 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-27 01:12:23.577710 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-27 01:12:23.577719 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-27 01:12:23.577726 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-27 01:12:23.577733 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-27 01:12:23.577743 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-27 01:12:23.577749 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-27 01:12:23.577756 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-27 01:12:23.577762 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-27 01:12:23.577769 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-27 01:12:23.577776 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-27 01:12:23.577782 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-27 01:12:23.577788 | orchestrator |
2026-03-27 01:12:23.577795 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-03-27 01:12:23.577801 | orchestrator | Friday 27 March 2026 01:10:59 +0000 (0:00:04.986) 0:03:03.312 **********
2026-03-27 01:12:23.577808 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-27 01:12:23.577814 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-27 01:12:23.577820 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-27 01:12:23.577827 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-27 01:12:23.577833 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-27 01:12:23.577842 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-27 01:12:23.577849 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-27 01:12:23.577856 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-27 01:12:23.577863 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-27 01:12:23.577869 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-27 01:12:23.577876 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-27 01:12:23.577882 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-27 01:12:23.577888 | orchestrator |
2026-03-27 01:12:23.577896 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-27 01:12:23.577902 | orchestrator | Friday 27 March 2026 01:11:04 +0000 (0:00:04.817) 0:03:08.130 **********
2026-03-27 01:12:23.577908 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-27 01:12:23.577914 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-27 01:12:23.577920 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-27 01:12:23.577925 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-27 01:12:23.577930 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-27 01:12:23.577935 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-27 01:12:23.577941 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-27 01:12:23.577947 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-27 01:12:23.577953 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-27 01:12:23.577959 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-27 01:12:23.577964 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-27 01:12:23.577970 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-27 01:12:23.577976 | orchestrator |
2026-03-27 01:12:23.577981 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-03-27 01:12:23.577987 | orchestrator | Friday 27 March 2026 01:11:09 +0000 (0:00:05.383) 0:03:13.514 **********
2026-03-27 01:12:23.577993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes':
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.578079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.578094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-27 01:12:23.578101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.578107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.578113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-27 01:12:23.578123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.578133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.578140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.578149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.578155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.578161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-27 01:12:23.578167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.578177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-27 01:12:23.578186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-27 01:12:23.578193 | orchestrator |
2026-03-27 01:12:23.578199 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-27 01:12:23.578205 | orchestrator | Friday 27 March 2026 01:11:14 +0000 (0:00:04.367) 0:03:17.882 **********
2026-03-27 01:12:23.578211 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:12:23.578216 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:12:23.578222 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:12:23.578229 | orchestrator |
2026-03-27 01:12:23.578235 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-03-27 01:12:23.578241 | orchestrator | Friday 27 March 2026 01:11:15 +0000 (0:00:01.009) 0:03:18.891 **********
2026-03-27 01:12:23.578247 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:23.578253 | orchestrator |
2026-03-27 01:12:23.578259 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-27 01:12:23.578265 | orchestrator | Friday 27 March 2026 01:11:17 +0000 (0:00:02.702) 0:03:21.594 **********
2026-03-27 01:12:23.578271 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:23.578276 | orchestrator |
2026-03-27 01:12:23.578281 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-27 01:12:23.578288 | orchestrator | Friday 27 March 2026 01:11:20 +0000 (0:00:02.428) 0:03:24.023 **********
2026-03-27 01:12:23.578294 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:23.578300 | orchestrator |
2026-03-27 01:12:23.578306 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-27 01:12:23.578314 | orchestrator | Friday 27 March 2026 01:11:22 +0000 (0:00:02.374) 0:03:26.397 **********
2026-03-27 01:12:23.578321 | orchestrator | changed: [testbed-node-0]
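The container definitions dumped by the "Check octavia containers" task each carry a kolla-style healthcheck dict (interval/retries/start_period/test/timeout). As a rough illustration of what those fields amount to, this sketch maps such a dict onto the equivalent `docker run --health-*` flags. Treating the bare numeric strings as seconds is an assumption made here for illustration; the real translation happens inside kolla-ansible's kolla_docker module, not in this code.

```python
# Sketch: translate a kolla-style healthcheck dict (as dumped by the
# "Check octavia containers" task above) into `docker run` health flags.
# Assumption: the bare numeric strings are seconds.
import shlex


def healthcheck_flags(hc: dict) -> list[str]:
    test = hc["test"]
    # ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'] -> shell form
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]


hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
      "timeout": "30"}
print(" ".join(shlex.quote(f) for f in healthcheck_flags(hc)))
```

`healthcheck_port <service> <port>` is kolla's built-in probe that a service process holds a connection on the given port, which is why the octavia-worker check above targets the RabbitMQ port 5672.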
2026-03-27 01:12:23.578327 | orchestrator |
2026-03-27 01:12:23.578334 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-27 01:12:23.578340 | orchestrator | Friday 27 March 2026 01:11:25 +0000 (0:00:02.540) 0:03:28.938 **********
2026-03-27 01:12:23.578346 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:23.578352 | orchestrator |
2026-03-27 01:12:23.578358 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-27 01:12:23.578365 | orchestrator | Friday 27 March 2026 01:11:45 +0000 (0:00:20.252) 0:03:49.191 **********
2026-03-27 01:12:23.578371 | orchestrator |
2026-03-27 01:12:23.578377 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-27 01:12:23.578383 | orchestrator | Friday 27 March 2026 01:11:45 +0000 (0:00:00.062) 0:03:49.253 **********
2026-03-27 01:12:23.578394 | orchestrator |
2026-03-27 01:12:23.578400 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-27 01:12:23.578406 | orchestrator | Friday 27 March 2026 01:11:45 +0000 (0:00:00.071) 0:03:49.324 **********
2026-03-27 01:12:23.578413 | orchestrator |
2026-03-27 01:12:23.578419 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-27 01:12:23.578425 | orchestrator | Friday 27 March 2026 01:11:45 +0000 (0:00:00.067) 0:03:49.392 **********
2026-03-27 01:12:23.578431 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:23.578438 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:12:23.578444 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:12:23.578451 | orchestrator |
2026-03-27 01:12:23.578457 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-27 01:12:23.578463 | orchestrator | Friday 27 March 2026 01:11:53 +0000 (0:00:08.230) 0:03:57.622 **********
2026-03-27 01:12:23.578470 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:23.578476 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:12:23.578482 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:12:23.578489 | orchestrator |
2026-03-27 01:12:23.578495 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-27 01:12:23.578501 | orchestrator | Friday 27 March 2026 01:12:04 +0000 (0:00:10.436) 0:04:08.058 **********
2026-03-27 01:12:23.578507 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:23.578512 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:12:23.578518 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:12:23.578525 | orchestrator |
2026-03-27 01:12:23.578531 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-27 01:12:23.578537 | orchestrator | Friday 27 March 2026 01:12:10 +0000 (0:00:05.798) 0:04:13.856 **********
2026-03-27 01:12:23.578543 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:23.578550 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:12:23.578556 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:12:23.578562 | orchestrator |
2026-03-27 01:12:23.578569 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-27 01:12:23.578574 | orchestrator | Friday 27 March 2026 01:12:15 +0000 (0:00:05.454) 0:04:19.311 **********
2026-03-27 01:12:23.578581 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:12:23.578587 | orchestrator | changed: [testbed-node-1]
2026-03-27 01:12:23.578593 | orchestrator | changed: [testbed-node-2]
2026-03-27 01:12:23.578599 | orchestrator |
2026-03-27 01:12:23.578605 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:12:23.578612 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-27 01:12:23.578619 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-27 01:12:23.578625 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-27 01:12:23.578631 | orchestrator |
2026-03-27 01:12:23.578637 | orchestrator |
2026-03-27 01:12:23.578644 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:12:23.578650 | orchestrator | Friday 27 March 2026 01:12:21 +0000 (0:00:06.079) 0:04:25.391 **********
2026-03-27 01:12:23.578659 | orchestrator | ===============================================================================
2026-03-27 01:12:23.578665 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.25s
2026-03-27 01:12:23.578671 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 19.22s
2026-03-27 01:12:23.578677 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.13s
2026-03-27 01:12:23.578683 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.22s
2026-03-27 01:12:23.578693 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.44s
2026-03-27 01:12:23.578700 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.77s
2026-03-27 01:12:23.578706 | orchestrator | octavia : Restart octavia-api container --------------------------------- 8.23s
2026-03-27 01:12:23.578712 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.62s
2026-03-27 01:12:23.578719 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.34s
2026-03-27 01:12:23.578725 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.08s
2026-03-27 01:12:23.578731
| orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.85s
2026-03-27 01:12:23.578738 | orchestrator | octavia : Get security groups for octavia ------------------------------- 5.84s
2026-03-27 01:12:23.578744 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.80s
2026-03-27 01:12:23.578750 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.57s
2026-03-27 01:12:23.578759 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.45s
2026-03-27 01:12:23.578765 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.38s
2026-03-27 01:12:23.578771 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.31s
2026-03-27 01:12:23.578778 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.20s
2026-03-27 01:12:23.578784 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.10s
2026-03-27 01:12:23.578790 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 4.99s
2026-03-27 01:12:23.578796 | orchestrator | 2026-03-27 01:12:23 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:26.621131 | orchestrator | 2026-03-27 01:12:26 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:29.662087 | orchestrator | 2026-03-27 01:12:29 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:32.702212 | orchestrator | 2026-03-27 01:12:32 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:35.747644 | orchestrator | 2026-03-27 01:12:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:38.789920 | orchestrator | 2026-03-27 01:12:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:41.834944 | orchestrator | 2026-03-27 01:12:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:44.874675 | orchestrator | 2026-03-27 01:12:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:47.918679 | orchestrator | 2026-03-27 01:12:47 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:50.966772 | orchestrator | 2026-03-27 01:12:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:54.007793 | orchestrator | 2026-03-27 01:12:54 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:12:57.067671 | orchestrator | 2026-03-27 01:12:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:13:00.105646 | orchestrator | 2026-03-27 01:13:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:13:03.148542 | orchestrator | 2026-03-27 01:13:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:13:06.189616 | orchestrator | 2026-03-27 01:13:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:13:09.229371 | orchestrator | 2026-03-27 01:13:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:13:12.265260 | orchestrator | 2026-03-27 01:13:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:13:15.310231 | orchestrator | 2026-03-27 01:13:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:13:18.356198 | orchestrator | 2026-03-27 01:13:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:13:21.403758 | orchestrator | 2026-03-27 01:13:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-27 01:13:24.445110 | orchestrator |
2026-03-27 01:13:24.656072 | orchestrator |
2026-03-27 01:13:24.659994 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Mar 27 01:13:24 UTC 2026
2026-03-27 01:13:24.660068 | orchestrator |
2026-03-27 01:13:25.073907 | orchestrator | ok: Runtime: 0:33:04.456930
2026-03-27 01:13:25.342098 |
2026-03-27 01:13:25.342292 | TASK [Bootstrap services]
2026-03-27 01:13:26.099022 | orchestrator |
2026-03-27 01:13:26.099114 | orchestrator | # BOOTSTRAP
2026-03-27 01:13:26.099123 | orchestrator |
2026-03-27 01:13:26.099127 | orchestrator | + set -e
2026-03-27 01:13:26.099136 | orchestrator | + echo
2026-03-27 01:13:26.099144 | orchestrator | + echo '# BOOTSTRAP'
2026-03-27 01:13:26.099150 | orchestrator | + echo
2026-03-27 01:13:26.099165 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-27 01:13:26.108535 | orchestrator | + set -e
2026-03-27 01:13:26.108580 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-27 01:13:30.706354 | orchestrator | 2026-03-27 01:13:30 | INFO  | It takes a moment until task db8aea44-0ee6-4c4b-b466-00d70d74cd4b (flavor-manager) has been started and output is visible here.
2026-03-27 01:13:38.946246 | orchestrator | 2026-03-27 01:13:35 | INFO  | Flavor SCS-1L-1 created
2026-03-27 01:13:38.946318 | orchestrator | 2026-03-27 01:13:35 | INFO  | Flavor SCS-1L-1-5 created
2026-03-27 01:13:38.946327 | orchestrator | 2026-03-27 01:13:35 | INFO  | Flavor SCS-1V-2 created
2026-03-27 01:13:38.946333 | orchestrator | 2026-03-27 01:13:35 | INFO  | Flavor SCS-1V-2-5 created
2026-03-27 01:13:38.946342 | orchestrator | 2026-03-27 01:13:36 | INFO  | Flavor SCS-1V-4 created
2026-03-27 01:13:38.946350 | orchestrator | 2026-03-27 01:13:36 | INFO  | Flavor SCS-1V-4-10 created
2026-03-27 01:13:38.946358 | orchestrator | 2026-03-27 01:13:36 | INFO  | Flavor SCS-1V-8 created
2026-03-27 01:13:38.946366 | orchestrator | 2026-03-27 01:13:36 | INFO  | Flavor SCS-1V-8-20 created
2026-03-27 01:13:38.946382 | orchestrator | 2026-03-27 01:13:36 | INFO  | Flavor SCS-2V-4 created
2026-03-27 01:13:38.946391 | orchestrator | 2026-03-27 01:13:36 | INFO  | Flavor SCS-2V-4-10 created
2026-03-27 01:13:38.946398 | orchestrator | 2026-03-27 01:13:36 | INFO  | Flavor SCS-2V-8 created
2026-03-27 01:13:38.946407 | orchestrator | 2026-03-27 01:13:36 | INFO  | Flavor SCS-2V-8-20 created
2026-03-27 01:13:38.946415 | orchestrator | 2026-03-27 01:13:36 | INFO  | Flavor SCS-2V-16 created
2026-03-27 01:13:38.946422 | orchestrator | 2026-03-27 01:13:36 | INFO  | Flavor SCS-2V-16-50 created
2026-03-27 01:13:38.946431 | orchestrator | 2026-03-27 01:13:37 | INFO  | Flavor SCS-4V-8 created
2026-03-27 01:13:38.946438 | orchestrator | 2026-03-27 01:13:37 | INFO  | Flavor SCS-4V-8-20 created
2026-03-27 01:13:38.946446 | orchestrator | 2026-03-27 01:13:37 | INFO  | Flavor SCS-4V-16 created
2026-03-27 01:13:38.946454 | orchestrator | 2026-03-27 01:13:37 | INFO  | Flavor SCS-4V-16-50 created
2026-03-27 01:13:38.946462 | orchestrator | 2026-03-27 01:13:37 | INFO  | Flavor SCS-4V-32 created
2026-03-27 01:13:38.946470 | orchestrator | 2026-03-27 01:13:37 | INFO  | Flavor SCS-4V-32-100 created
2026-03-27 01:13:38.946478 | orchestrator | 2026-03-27 01:13:37 | INFO  | Flavor SCS-8V-16 created
2026-03-27 01:13:38.946487 | orchestrator | 2026-03-27 01:13:37 | INFO  | Flavor SCS-8V-16-50 created
2026-03-27 01:13:38.946496 | orchestrator | 2026-03-27 01:13:38 | INFO  | Flavor SCS-8V-32 created
2026-03-27 01:13:38.946505 | orchestrator | 2026-03-27 01:13:38 | INFO  | Flavor SCS-8V-32-100 created
2026-03-27 01:13:38.946513 | orchestrator | 2026-03-27 01:13:38 | INFO  | Flavor SCS-16V-32 created
2026-03-27 01:13:38.946521 | orchestrator | 2026-03-27 01:13:38 | INFO  | Flavor SCS-16V-32-100 created
2026-03-27 01:13:38.946529 | orchestrator | 2026-03-27 01:13:38 | INFO  | Flavor SCS-2V-4-20s created
2026-03-27 01:13:38.946537 | orchestrator | 2026-03-27 01:13:38 | INFO  | Flavor SCS-4V-8-50s created
2026-03-27 01:13:38.946545 | orchestrator | 2026-03-27 01:13:38 | INFO  | Flavor SCS-4V-16-100s created
2026-03-27 01:13:38.946553 | orchestrator | 2026-03-27 01:13:38 | INFO  | Flavor SCS-8V-32-100s created
2026-03-27 01:13:40.563094 | orchestrator | 2026-03-27 01:13:40 | INFO  | Trying to run play
bootstrap-basic in environment openstack
2026-03-27 01:13:50.653632 | orchestrator | 2026-03-27 01:13:50 | INFO  | Prepare task for execution of bootstrap-basic.
2026-03-27 01:13:50.744174 | orchestrator | 2026-03-27 01:13:50 | INFO  | Task 4d4fba2b-33e7-4251-960d-49bf987232d3 (bootstrap-basic) was prepared for execution.
2026-03-27 01:13:50.744227 | orchestrator | 2026-03-27 01:13:50 | INFO  | It takes a moment until task 4d4fba2b-33e7-4251-960d-49bf987232d3 (bootstrap-basic) has been started and output is visible here.
2026-03-27 01:14:38.009104 | orchestrator |
2026-03-27 01:14:38.009206 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-27 01:14:38.009219 | orchestrator |
2026-03-27 01:14:38.009227 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-27 01:14:38.009234 | orchestrator | Friday 27 March 2026 01:13:54 +0000 (0:00:00.106) 0:00:00.106 **********
2026-03-27 01:14:38.009241 | orchestrator | ok: [localhost]
2026-03-27 01:14:38.009250 | orchestrator |
2026-03-27 01:14:38.009258 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-27 01:14:38.009265 | orchestrator | Friday 27 March 2026 01:13:56 +0000 (0:00:01.995) 0:00:02.102 **********
2026-03-27 01:14:38.009274 | orchestrator | ok: [localhost]
2026-03-27 01:14:38.009281 | orchestrator |
2026-03-27 01:14:38.009288 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-27 01:14:38.009295 | orchestrator | Friday 27 March 2026 01:14:05 +0000 (0:00:09.045) 0:00:11.148 **********
2026-03-27 01:14:38.009302 | orchestrator | changed: [localhost]
2026-03-27 01:14:38.009320 | orchestrator |
2026-03-27 01:14:38.009327 | orchestrator | TASK [Create public network] ***************************************************
2026-03-27 01:14:38.009335 | orchestrator | Friday 27 March 2026 01:14:12 +0000 (0:00:07.462) 0:00:18.610 **********
2026-03-27 01:14:38.009342 | orchestrator | changed: [localhost]
2026-03-27 01:14:38.009349 | orchestrator |
2026-03-27 01:14:38.009360 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-27 01:14:38.009368 | orchestrator | Friday 27 March 2026 01:14:18 +0000 (0:00:05.442) 0:00:24.052 **********
2026-03-27 01:14:38.009375 | orchestrator | changed: [localhost]
2026-03-27 01:14:38.009382 | orchestrator |
2026-03-27 01:14:38.009390 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-27 01:14:38.009396 | orchestrator | Friday 27 March 2026 01:14:24 +0000 (0:00:06.545) 0:00:30.597 **********
2026-03-27 01:14:38.009403 | orchestrator | changed: [localhost]
2026-03-27 01:14:38.009410 | orchestrator |
2026-03-27 01:14:38.009418 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-27 01:14:38.009433 | orchestrator | Friday 27 March 2026 01:14:29 +0000 (0:00:04.655) 0:00:35.253 **********
2026-03-27 01:14:38.009439 | orchestrator | changed: [localhost]
2026-03-27 01:14:38.009446 | orchestrator |
2026-03-27 01:14:38.009454 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-27 01:14:38.009469 | orchestrator | Friday 27 March 2026 01:14:34 +0000 (0:00:04.598) 0:00:39.852 **********
2026-03-27 01:14:38.009477 | orchestrator | ok: [localhost]
2026-03-27 01:14:38.009483 | orchestrator |
2026-03-27 01:14:38.009491 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:14:38.009498 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-27 01:14:38.009507 | orchestrator |
2026-03-27 01:14:38.009515 | orchestrator |
2026-03-27 01:14:38.009522 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:14:38.009530 | orchestrator | Friday 27 March 2026 01:14:37 +0000 (0:00:03.770) 0:00:43.622 **********
2026-03-27 01:14:38.009537 | orchestrator | ===============================================================================
2026-03-27 01:14:38.009544 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.05s
2026-03-27 01:14:38.009576 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.46s
2026-03-27 01:14:38.009583 | orchestrator | Set public network to default ------------------------------------------- 6.55s
2026-03-27 01:14:38.009591 | orchestrator | Create public network --------------------------------------------------- 5.44s
2026-03-27 01:14:38.009601 | orchestrator | Create public subnet ---------------------------------------------------- 4.66s
2026-03-27 01:14:38.009609 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.60s
2026-03-27 01:14:38.009616 | orchestrator | Create manager role ----------------------------------------------------- 3.77s
2026-03-27 01:14:38.009623 | orchestrator | Gathering Facts --------------------------------------------------------- 2.00s
2026-03-27 01:14:40.014997 | orchestrator | 2026-03-27 01:14:40 | INFO  | It takes a moment until task c09ef89b-6a86-478d-9218-233e6b8058f4 (image-manager) has been started and output is visible here.
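The flavor-manager run above created flavors named after the SCS flavor naming scheme (e.g. SCS-2V-4-10: 2 vCPUs, 4 GiB RAM, 10 GB root disk). The parser below is a simplified sketch covering only the name forms that appear in this log; reading the letter after the CPU count as a performance class ("V" vCPU, "L" low-performance) and a trailing "s" as local SSD storage follows the SCS naming convention, but the full standard defines more variants than are handled here.

```python
import re

# Minimal sketch of the SCS flavor names seen in the flavor-manager output
# above (SCS-<cpus><class>-<ram>[-<disk>[s]]). Assumption: class letter is
# "V" (vCPU) or "L" (low-performance); trailing "s" marks local SSD.
FLAVOR_RE = re.compile(r"^SCS-(\d+)([VL])-(\d+)(?:-(\d+)(s?))?$")


def parse_scs_flavor(name: str) -> dict:
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    cpus, cls, ram, disk, ssd = m.groups()
    return {
        "vcpus": int(cpus),
        "cpu_class": cls,
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else 0,  # 0 = no root disk in the name
        "local_ssd": ssd == "s",
    }


print(parse_scs_flavor("SCS-2V-4-10"))
print(parse_scs_flavor("SCS-8V-32-100s"))
```

Every flavor name printed by the flavor-manager task above (SCS-1L-1 through SCS-8V-32-100s) matches this pattern.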
2026-03-27 01:15:21.370308 | orchestrator | 2026-03-27 01:14:42 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-27 01:15:21.370390 | orchestrator | 2026-03-27 01:14:43 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-27 01:15:21.370398 | orchestrator | 2026-03-27 01:14:43 | INFO  | Importing image Cirros 0.6.2 2026-03-27 01:15:21.370403 | orchestrator | 2026-03-27 01:14:43 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-27 01:15:21.370408 | orchestrator | 2026-03-27 01:14:45 | INFO  | Waiting for image to leave queued state... 2026-03-27 01:15:21.370414 | orchestrator | 2026-03-27 01:14:47 | INFO  | Waiting for import to complete... 2026-03-27 01:15:21.370418 | orchestrator | 2026-03-27 01:14:57 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-27 01:15:21.370423 | orchestrator | 2026-03-27 01:14:58 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-27 01:15:21.370427 | orchestrator | 2026-03-27 01:14:58 | INFO  | Setting internal_version = 0.6.2 2026-03-27 01:15:21.370431 | orchestrator | 2026-03-27 01:14:58 | INFO  | Setting image_original_user = cirros 2026-03-27 01:15:21.370436 | orchestrator | 2026-03-27 01:14:58 | INFO  | Adding tag os:cirros 2026-03-27 01:15:21.370440 | orchestrator | 2026-03-27 01:14:58 | INFO  | Setting property architecture: x86_64 2026-03-27 01:15:21.370444 | orchestrator | 2026-03-27 01:14:58 | INFO  | Setting property hw_disk_bus: scsi 2026-03-27 01:15:21.370448 | orchestrator | 2026-03-27 01:14:58 | INFO  | Setting property hw_rng_model: virtio 2026-03-27 01:15:21.370452 | orchestrator | 2026-03-27 01:14:59 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-27 01:15:21.370456 | orchestrator | 2026-03-27 01:14:59 | INFO  | Setting property hw_watchdog_action: reset 2026-03-27 01:15:21.370460 | orchestrator | 2026-03-27 01:14:59 | 
INFO  | Setting property hypervisor_type: qemu 2026-03-27 01:15:21.370470 | orchestrator | 2026-03-27 01:14:59 | INFO  | Setting property os_distro: cirros 2026-03-27 01:15:21.370474 | orchestrator | 2026-03-27 01:14:59 | INFO  | Setting property os_purpose: minimal 2026-03-27 01:15:21.370478 | orchestrator | 2026-03-27 01:15:00 | INFO  | Setting property replace_frequency: never 2026-03-27 01:15:21.370482 | orchestrator | 2026-03-27 01:15:00 | INFO  | Setting property uuid_validity: none 2026-03-27 01:15:21.370486 | orchestrator | 2026-03-27 01:15:00 | INFO  | Setting property provided_until: none 2026-03-27 01:15:21.370490 | orchestrator | 2026-03-27 01:15:00 | INFO  | Setting property image_description: Cirros 2026-03-27 01:15:21.370494 | orchestrator | 2026-03-27 01:15:01 | INFO  | Setting property image_name: Cirros 2026-03-27 01:15:21.370513 | orchestrator | 2026-03-27 01:15:01 | INFO  | Setting property internal_version: 0.6.2 2026-03-27 01:15:21.370517 | orchestrator | 2026-03-27 01:15:01 | INFO  | Setting property image_original_user: cirros 2026-03-27 01:15:21.370521 | orchestrator | 2026-03-27 01:15:01 | INFO  | Setting property os_version: 0.6.2 2026-03-27 01:15:21.370525 | orchestrator | 2026-03-27 01:15:02 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-27 01:15:21.370530 | orchestrator | 2026-03-27 01:15:02 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-27 01:15:21.370534 | orchestrator | 2026-03-27 01:15:02 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-27 01:15:21.370538 | orchestrator | 2026-03-27 01:15:02 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-27 01:15:21.370545 | orchestrator | 2026-03-27 01:15:02 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-27 01:15:21.370549 | orchestrator | 2026-03-27 01:15:03 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-27 01:15:21.370552 | orchestrator | 2026-03-27 
01:15:03 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-27 01:15:21.370556 | orchestrator | 2026-03-27 01:15:03 | INFO  | Importing image Cirros 0.6.3 2026-03-27 01:15:21.370560 | orchestrator | 2026-03-27 01:15:03 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-27 01:15:21.370564 | orchestrator | 2026-03-27 01:15:03 | INFO  | Waiting for image to leave queued state... 2026-03-27 01:15:21.370571 | orchestrator | 2026-03-27 01:15:05 | INFO  | Waiting for import to complete... 2026-03-27 01:15:21.370591 | orchestrator | 2026-03-27 01:15:15 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-27 01:15:21.370599 | orchestrator | 2026-03-27 01:15:16 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-27 01:15:21.370607 | orchestrator | 2026-03-27 01:15:16 | INFO  | Setting internal_version = 0.6.3 2026-03-27 01:15:21.370612 | orchestrator | 2026-03-27 01:15:16 | INFO  | Setting image_original_user = cirros 2026-03-27 01:15:21.370618 | orchestrator | 2026-03-27 01:15:16 | INFO  | Adding tag os:cirros 2026-03-27 01:15:21.370624 | orchestrator | 2026-03-27 01:15:16 | INFO  | Setting property architecture: x86_64 2026-03-27 01:15:21.370630 | orchestrator | 2026-03-27 01:15:16 | INFO  | Setting property hw_disk_bus: scsi 2026-03-27 01:15:21.370636 | orchestrator | 2026-03-27 01:15:16 | INFO  | Setting property hw_rng_model: virtio 2026-03-27 01:15:21.370641 | orchestrator | 2026-03-27 01:15:16 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-27 01:15:21.370647 | orchestrator | 2026-03-27 01:15:17 | INFO  | Setting property hw_watchdog_action: reset 2026-03-27 01:15:21.370654 | orchestrator | 2026-03-27 01:15:17 | INFO  | Setting property hypervisor_type: qemu 2026-03-27 01:15:21.370660 | orchestrator | 2026-03-27 01:15:17 | INFO  | Setting property os_distro: cirros 
2026-03-27 01:15:21.370666 | orchestrator | 2026-03-27 01:15:17 | INFO  | Setting property os_purpose: minimal 2026-03-27 01:15:21.370673 | orchestrator | 2026-03-27 01:15:17 | INFO  | Setting property replace_frequency: never 2026-03-27 01:15:21.370680 | orchestrator | 2026-03-27 01:15:18 | INFO  | Setting property uuid_validity: none 2026-03-27 01:15:21.370686 | orchestrator | 2026-03-27 01:15:18 | INFO  | Setting property provided_until: none 2026-03-27 01:15:21.370693 | orchestrator | 2026-03-27 01:15:18 | INFO  | Setting property image_description: Cirros 2026-03-27 01:15:21.370706 | orchestrator | 2026-03-27 01:15:18 | INFO  | Setting property image_name: Cirros 2026-03-27 01:15:21.370712 | orchestrator | 2026-03-27 01:15:19 | INFO  | Setting property internal_version: 0.6.3 2026-03-27 01:15:21.370718 | orchestrator | 2026-03-27 01:15:19 | INFO  | Setting property image_original_user: cirros 2026-03-27 01:15:21.370725 | orchestrator | 2026-03-27 01:15:19 | INFO  | Setting property os_version: 0.6.3 2026-03-27 01:15:21.370731 | orchestrator | 2026-03-27 01:15:19 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-27 01:15:21.370738 | orchestrator | 2026-03-27 01:15:20 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-27 01:15:21.370745 | orchestrator | 2026-03-27 01:15:20 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-27 01:15:21.370751 | orchestrator | 2026-03-27 01:15:20 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-27 01:15:21.370758 | orchestrator | 2026-03-27 01:15:20 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-27 01:15:21.603453 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-27 01:15:23.561146 | orchestrator | 2026-03-27 01:15:23 | INFO  | date: 2026-03-26 2026-03-27 01:15:23.561226 | orchestrator | 2026-03-27 01:15:23 | INFO  | image: 
octavia-amphora-haproxy-2024.2.20260326.qcow2 2026-03-27 01:15:23.561289 | orchestrator | 2026-03-27 01:15:23 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260326.qcow2 2026-03-27 01:15:23.561316 | orchestrator | 2026-03-27 01:15:23 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260326.qcow2.CHECKSUM 2026-03-27 01:15:23.790216 | orchestrator | 2026-03-27 01:15:23 | INFO  | checksum: 95322c96815879973a320f11cf6c9ad6237f8791183c852e0c8319e08839b1ac 2026-03-27 01:15:23.875644 | orchestrator | 2026-03-27 01:15:23 | INFO  | It takes a moment until task 3b1885bb-936e-4e06-af50-3a308e2283ec (image-manager) has been started and output is visible here. 2026-03-27 01:16:26.273658 | orchestrator | 2026-03-27 01:15:26 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-26' 2026-03-27 01:16:26.273759 | orchestrator | 2026-03-27 01:15:26 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260326.qcow2: 200 2026-03-27 01:16:26.273771 | orchestrator | 2026-03-27 01:15:26 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-26 2026-03-27 01:16:26.273778 | orchestrator | 2026-03-27 01:15:26 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260326.qcow2 2026-03-27 01:16:26.273786 | orchestrator | 2026-03-27 01:15:28 | INFO  | Waiting for image to leave queued state... 2026-03-27 01:16:26.273793 | orchestrator | 2026-03-27 01:15:30 | INFO  | Waiting for import to complete... 2026-03-27 01:16:26.273800 | orchestrator | 2026-03-27 01:15:40 | INFO  | Waiting for import to complete... 2026-03-27 01:16:26.273807 | orchestrator | 2026-03-27 01:15:50 | INFO  | Waiting for import to complete... 
2026-03-27 01:16:26.273814 | orchestrator | 2026-03-27 01:16:00 | INFO  | Waiting for import to complete... 2026-03-27 01:16:26.273823 | orchestrator | 2026-03-27 01:16:10 | INFO  | Waiting for import to complete... 2026-03-27 01:16:26.273830 | orchestrator | 2026-03-27 01:16:21 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-26' successfully completed, reloading images 2026-03-27 01:16:26.273853 | orchestrator | 2026-03-27 01:16:21 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-26' 2026-03-27 01:16:26.273861 | orchestrator | 2026-03-27 01:16:21 | INFO  | Setting internal_version = 2026-03-26 2026-03-27 01:16:26.273866 | orchestrator | 2026-03-27 01:16:21 | INFO  | Setting image_original_user = ubuntu 2026-03-27 01:16:26.273870 | orchestrator | 2026-03-27 01:16:21 | INFO  | Adding tag amphora 2026-03-27 01:16:26.273874 | orchestrator | 2026-03-27 01:16:21 | INFO  | Adding tag os:ubuntu 2026-03-27 01:16:26.273878 | orchestrator | 2026-03-27 01:16:22 | INFO  | Setting property architecture: x86_64 2026-03-27 01:16:26.273881 | orchestrator | 2026-03-27 01:16:22 | INFO  | Setting property hw_disk_bus: scsi 2026-03-27 01:16:26.273885 | orchestrator | 2026-03-27 01:16:22 | INFO  | Setting property hw_rng_model: virtio 2026-03-27 01:16:26.273889 | orchestrator | 2026-03-27 01:16:22 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-27 01:16:26.273893 | orchestrator | 2026-03-27 01:16:23 | INFO  | Setting property hw_watchdog_action: reset 2026-03-27 01:16:26.273897 | orchestrator | 2026-03-27 01:16:23 | INFO  | Setting property hypervisor_type: qemu 2026-03-27 01:16:26.273901 | orchestrator | 2026-03-27 01:16:23 | INFO  | Setting property os_distro: ubuntu 2026-03-27 01:16:26.273905 | orchestrator | 2026-03-27 01:16:23 | INFO  | Setting property replace_frequency: quarterly 2026-03-27 01:16:26.273909 | orchestrator | 2026-03-27 01:16:24 | INFO  | Setting property uuid_validity: last-1 2026-03-27 01:16:26.273912 | orchestrator | 
2026-03-27 01:16:24 | INFO  | Setting property provided_until: none 2026-03-27 01:16:26.273916 | orchestrator | 2026-03-27 01:16:24 | INFO  | Setting property os_purpose: network 2026-03-27 01:16:26.273920 | orchestrator | 2026-03-27 01:16:24 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-03-27 01:16:26.273930 | orchestrator | 2026-03-27 01:16:24 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-03-27 01:16:26.273935 | orchestrator | 2026-03-27 01:16:25 | INFO  | Setting property internal_version: 2026-03-26 2026-03-27 01:16:26.273938 | orchestrator | 2026-03-27 01:16:25 | INFO  | Setting property image_original_user: ubuntu 2026-03-27 01:16:26.273942 | orchestrator | 2026-03-27 01:16:25 | INFO  | Setting property os_version: 2026-03-26 2026-03-27 01:16:26.273946 | orchestrator | 2026-03-27 01:16:25 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260326.qcow2 2026-03-27 01:16:26.273950 | orchestrator | 2026-03-27 01:16:25 | INFO  | Setting property image_build_date: 2026-03-26 2026-03-27 01:16:26.273954 | orchestrator | 2026-03-27 01:16:25 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-26' 2026-03-27 01:16:26.273958 | orchestrator | 2026-03-27 01:16:25 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-26' 2026-03-27 01:16:26.273961 | orchestrator | 2026-03-27 01:16:26 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-03-27 01:16:26.273973 | orchestrator | 2026-03-27 01:16:26 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-03-27 01:16:26.273977 | orchestrator | 2026-03-27 01:16:26 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-03-27 01:16:26.273981 | orchestrator | 2026-03-27 01:16:26 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-03-27 01:16:26.537988 | orchestrator | ok: 
Runtime: 0:03:00.725487 2026-03-27 01:16:26.551432 | 2026-03-27 01:16:26.551544 | TASK [Run checks] 2026-03-27 01:16:27.220205 | orchestrator | + set -e 2026-03-27 01:16:27.220324 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-27 01:16:27.220335 | orchestrator | ++ export INTERACTIVE=false 2026-03-27 01:16:27.220345 | orchestrator | ++ INTERACTIVE=false 2026-03-27 01:16:27.220351 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-27 01:16:27.220356 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-27 01:16:27.220367 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-27 01:16:27.221498 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-27 01:16:27.227358 | orchestrator | 2026-03-27 01:16:27.227412 | orchestrator | # CHECK 2026-03-27 01:16:27.227418 | orchestrator | 2026-03-27 01:16:27.227422 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-27 01:16:27.227429 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-27 01:16:27.227434 | orchestrator | + echo 2026-03-27 01:16:27.227438 | orchestrator | + echo '# CHECK' 2026-03-27 01:16:27.227442 | orchestrator | + echo 2026-03-27 01:16:27.227452 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-27 01:16:27.228266 | orchestrator | ++ semver latest 5.0.0 2026-03-27 01:16:27.278289 | orchestrator | 2026-03-27 01:16:27.278342 | orchestrator | ## Containers @ testbed-manager 2026-03-27 01:16:27.278350 | orchestrator | 2026-03-27 01:16:27.278358 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-27 01:16:27.278364 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-27 01:16:27.278371 | orchestrator | + echo 2026-03-27 01:16:27.278378 | orchestrator | + echo '## Containers @ testbed-manager' 2026-03-27 01:16:27.278385 | orchestrator | + echo 2026-03-27 01:16:27.278391 | orchestrator | + osism container testbed-manager ps 2026-03-27 
01:16:28.247773 | orchestrator | 2026-03-27 01:16:28 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-03-27 01:16:28.623112 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-27 01:16:28.623190 | orchestrator | cdc84092ed3d registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_blackbox_exporter 2026-03-27 01:16:28.623212 | orchestrator | b7c93b117a93 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_alertmanager 2026-03-27 01:16:28.623223 | orchestrator | c5d555df7755 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2026-03-27 01:16:28.623230 | orchestrator | 4b8f1a5ae09d registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2026-03-27 01:16:28.623240 | orchestrator | ae649772af87 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_server 2026-03-27 01:16:28.623247 | orchestrator | 782ecf243e1e registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2026-03-27 01:16:28.623254 | orchestrator | 705460416d10 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-03-27 01:16:28.623261 | orchestrator | 2f6c2ebccbae registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2026-03-27 01:16:28.623592 | orchestrator | 6f7a9cf8da31 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-03-27 01:16:28.623607 | orchestrator | 194e05b342bf phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin 2026-03-27 01:16:28.623613 | 
orchestrator | a088ca6b9e22 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 30 minutes openstackclient 2026-03-27 01:16:28.623618 | orchestrator | f1ffc067b1bf registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 30 minutes ago Up 30 minutes (healthy) 8080/tcp homer 2026-03-27 01:16:28.623623 | orchestrator | 50c23767dd3f registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-03-27 01:16:28.623627 | orchestrator | d8719f220528 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 57 minutes ago Up 36 minutes (healthy) manager-inventory_reconciler-1 2026-03-27 01:16:28.623634 | orchestrator | c1dc35748bb1 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) osism-ansible 2026-03-27 01:16:28.623642 | orchestrator | cd1efd0e7516 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) ceph-ansible 2026-03-27 01:16:28.623654 | orchestrator | 37262d2af170 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) osism-kubernetes 2026-03-27 01:16:28.623662 | orchestrator | 8bcb0a93ef16 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) kolla-ansible 2026-03-27 01:16:28.623681 | orchestrator | b6df56113c26 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 57 minutes ago Up 37 minutes (healthy) 8000/tcp manager-ara-server-1 2026-03-27 01:16:28.623689 | orchestrator | e65b8d5534bd registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-flower-1 2026-03-27 01:16:28.623697 | orchestrator | d9422dc70cbc registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 57 minutes ago Up 37 minutes (healthy) osismclient 
2026-03-27 01:16:28.623705 | orchestrator | 4b9ced6a78ad registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 57 minutes ago Up 37 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-03-27 01:16:28.623712 | orchestrator | 33e1aa2c64cf registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 3306/tcp manager-mariadb-1 2026-03-27 01:16:28.623723 | orchestrator | e45f1d447207 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-beat-1 2026-03-27 01:16:28.623727 | orchestrator | feb4fac49926 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-03-27 01:16:28.623740 | orchestrator | 8250c8fbf1e1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 6379/tcp manager-redis-1 2026-03-27 01:16:28.623747 | orchestrator | 8249cfb6fe3b registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-openstack-1 2026-03-27 01:16:28.623754 | orchestrator | 7a62cc4f3c9e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-listener-1 2026-03-27 01:16:28.623761 | orchestrator | 84f85492ab09 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-03-27 01:16:28.726757 | orchestrator | 2026-03-27 01:16:28.726842 | orchestrator | ## Images @ testbed-manager 2026-03-27 01:16:28.726854 | orchestrator | 2026-03-27 01:16:28.726863 | orchestrator | + echo 2026-03-27 01:16:28.726871 | orchestrator | + echo '## Images @ testbed-manager' 2026-03-27 01:16:28.726879 | orchestrator | + echo 2026-03-27 01:16:28.726889 | orchestrator | + osism container 
testbed-manager images 2026-03-27 01:16:30.073889 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-27 01:16:30.073953 | orchestrator | registry.osism.tech/osism/osism-ansible latest 1f15c6a1da95 About an hour ago 634MB 2026-03-27 01:16:30.073961 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 80ec9d4e39f2 About an hour ago 635MB 2026-03-27 01:16:30.073967 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 3b4dd019f326 About an hour ago 585MB 2026-03-27 01:16:30.073974 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 5da6b51d3b9f About an hour ago 1.24GB 2026-03-27 01:16:30.073996 | orchestrator | registry.osism.tech/osism/osism latest 4bf998e3251c About an hour ago 409MB 2026-03-27 01:16:30.074002 | orchestrator | registry.osism.tech/osism/osism-frontend latest 5c9cd7f8aed9 About an hour ago 212MB 2026-03-27 01:16:30.074008 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest ff714b1cf250 About an hour ago 357MB 2026-03-27 01:16:30.074055 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 b6d0c3769138 21 hours ago 239MB 2026-03-27 01:16:30.074062 | orchestrator | registry.osism.tech/osism/cephclient reef 2de9df6dbfef 21 hours ago 453MB 2026-03-27 01:16:30.074068 | orchestrator | registry.osism.tech/kolla/cron 2024.2 c9efd58c29d6 23 hours ago 277MB 2026-03-27 01:16:30.074074 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2a11b5cabe47 23 hours ago 590MB 2026-03-27 01:16:30.074080 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 4b35b95033e8 23 hours ago 679MB 2026-03-27 01:16:30.074086 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 d5a9dab69519 23 hours ago 850MB 2026-03-27 01:16:30.074093 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 d1443ec30884 23 hours ago 368MB 2026-03-27 01:16:30.074112 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 66153feb3e4f 23 hours ago 317MB 
2026-03-27 01:16:30.074119 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 f3eb4eac593f 23 hours ago 415MB 2026-03-27 01:16:30.074124 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 88eed1b8b5c2 23 hours ago 319MB 2026-03-27 01:16:30.074130 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 8 weeks ago 41.4MB 2026-03-27 01:16:30.074136 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB 2026-03-27 01:16:30.074142 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-03-27 01:16:30.074148 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-03-27 01:16:30.074154 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-03-27 01:16:30.074160 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-03-27 01:16:30.074166 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB 2026-03-27 01:16:30.174938 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-27 01:16:30.176086 | orchestrator | ++ semver latest 5.0.0 2026-03-27 01:16:30.215738 | orchestrator | 2026-03-27 01:16:30.215802 | orchestrator | ## Containers @ testbed-node-0 2026-03-27 01:16:30.215813 | orchestrator | 2026-03-27 01:16:30.215817 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-27 01:16:30.215821 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-27 01:16:30.215825 | orchestrator | + echo 2026-03-27 01:16:30.215830 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-03-27 01:16:30.215834 | orchestrator | + echo 2026-03-27 01:16:30.215839 | orchestrator | + osism container testbed-node-0 ps 2026-03-27 01:16:31.504414 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-27 
01:16:31.504484 | orchestrator | 2216720409c3 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-27 01:16:31.504496 | orchestrator | 91d2f5585d84 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-27 01:16:31.504504 | orchestrator | 0f0d0fe7772f registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-27 01:16:31.504511 | orchestrator | 11872c486f44 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-03-27 01:16:31.504517 | orchestrator | 07c12199beb4 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-03-27 01:16:31.504524 | orchestrator | d4063c0ff774 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-03-27 01:16:31.504531 | orchestrator | 9d4b814595e9 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-03-27 01:16:31.504550 | orchestrator | e515fc663152 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2026-03-27 01:16:31.504554 | orchestrator | 910f781faa60 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-03-27 01:16:31.504568 | orchestrator | ea51c84d23a1 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2026-03-27 01:16:31.504572 | orchestrator | 4551d5ab1b61 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup 2026-03-27 01:16:31.504576 | 
orchestrator | 40ee09b1cef1 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2026-03-27 01:16:31.504580 | orchestrator | 00b66bd65c2b registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume 2026-03-27 01:16:31.504584 | orchestrator | d94346c5e15e registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2026-03-27 01:16:31.504588 | orchestrator | 001dd3f4d7f7 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-03-27 01:16:31.504591 | orchestrator | 476cc9fbbb78 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2026-03-27 01:16:31.504734 | orchestrator | 1043035ce426 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2026-03-27 01:16:31.504743 | orchestrator | c279a04c1c79 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2026-03-27 01:16:31.504747 | orchestrator | d0b18256dbbe registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2026-03-27 01:16:31.504751 | orchestrator | 69a993a8d7e4 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2026-03-27 01:16:31.504757 | orchestrator | 76f6923dd85b registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2026-03-27 01:16:31.504766 | orchestrator | 13af210353ea registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 
minutes (healthy) magnum_api 2026-03-27 01:16:31.504773 | orchestrator | 5593a75ae770 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server 2026-03-27 01:16:31.504779 | orchestrator | 8348007bace9 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2026-03-27 01:16:31.504786 | orchestrator | aa62c69be74e registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2026-03-27 01:16:31.504794 | orchestrator | e42150a848d1 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns 2026-03-27 01:16:31.504800 | orchestrator | cce00d692f4c registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2026-03-27 01:16:31.504807 | orchestrator | c002f6f28466 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central 2026-03-27 01:16:31.504818 | orchestrator | 6be836e1e780 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_api 2026-03-27 01:16:31.504829 | orchestrator | 03f5da9c38d8 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2026-03-27 01:16:31.504833 | orchestrator | 83546bc46830 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2026-03-27 01:16:31.504837 | orchestrator | 3ab1dad35a0e registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2026-03-27 01:16:31.504841 | orchestrator | 77d6dc355e2f 
registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2026-03-27 01:16:31.504845 | orchestrator | a47a4ee1699a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2026-03-27 01:16:31.504848 | orchestrator | 7a734596c816 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-03-27 01:16:31.504852 | orchestrator | 443c2c275779 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-03-27 01:16:31.504856 | orchestrator | 72a3a482435c registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-03-27 01:16:31.504859 | orchestrator | 9c35d2389072 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-03-27 01:16:31.504863 | orchestrator | 7c24c9704837 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-03-27 01:16:31.504874 | orchestrator | 9c68d769b1bb registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2026-03-27 01:16:31.504878 | orchestrator | b06ab6140b9f registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-03-27 01:16:31.504882 | orchestrator | a5f84c5544a3 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2026-03-27 01:16:31.504889 | orchestrator | 1ba49b31fe78 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-03-27 01:16:31.504905 | orchestrator | dc405c56f653 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init 
--single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2026-03-27 01:16:31.504912 | orchestrator | a7b15c4d7462 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2026-03-27 01:16:31.504918 | orchestrator | 49bfe4df3d96 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2026-03-27 01:16:31.504925 | orchestrator | fc744b22ad86 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2026-03-27 01:16:31.504936 | orchestrator | ec815530c47d registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2026-03-27 01:16:31.504948 | orchestrator | 7e634cdc2c9d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0 2026-03-27 01:16:31.504952 | orchestrator | 32a92d2baf80 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2026-03-27 01:16:31.504956 | orchestrator | 59d16a156f12 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2026-03-27 01:16:31.504960 | orchestrator | 8fc815548e56 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-03-27 01:16:31.504963 | orchestrator | 31b37dae89cf registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2026-03-27 01:16:31.504967 | orchestrator | 3ec86976590a registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2026-03-27 01:16:31.504974 | orchestrator | 5817fd26e956 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2026-03-27 
01:16:31.504978 | orchestrator | 5ff7c0cc5e6a registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2026-03-27 01:16:31.504982 | orchestrator | c27ccae1202f registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes cron 2026-03-27 01:16:31.504985 | orchestrator | 77c3a2f1ba5c registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2026-03-27 01:16:31.504989 | orchestrator | a5cf86a2b2e6 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes fluentd 2026-03-27 01:16:31.613872 | orchestrator | 2026-03-27 01:16:31.613930 | orchestrator | ## Images @ testbed-node-0 2026-03-27 01:16:31.613939 | orchestrator | 2026-03-27 01:16:31.613946 | orchestrator | + echo 2026-03-27 01:16:31.613953 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-27 01:16:31.613961 | orchestrator | + echo 2026-03-27 01:16:31.613968 | orchestrator | + osism container testbed-node-0 images 2026-03-27 01:16:32.938114 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-27 01:16:32.938170 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 51d59d2a41b6 21 hours ago 1.35GB 2026-03-27 01:16:32.938177 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 4a1ba63d8d47 23 hours ago 1.57GB 2026-03-27 01:16:32.938183 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 fa16f377b9ca 23 hours ago 1.54GB 2026-03-27 01:16:32.938187 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a4acb9fc910d 23 hours ago 287MB 2026-03-27 01:16:32.938191 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 3898c8190568 23 hours ago 285MB 2026-03-27 01:16:32.938196 | orchestrator | registry.osism.tech/kolla/cron 2024.2 c9efd58c29d6 23 hours ago 277MB 2026-03-27 01:16:32.938201 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 dbdbc1ba9592 23 hours ago 1.04GB 2026-03-27 
01:16:32.938205 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 84283d009f42 23 hours ago 333MB 2026-03-27 01:16:32.938209 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2a11b5cabe47 23 hours ago 590MB 2026-03-27 01:16:32.938214 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 179e517dde06 23 hours ago 277MB 2026-03-27 01:16:32.938228 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 4b35b95033e8 23 hours ago 679MB 2026-03-27 01:16:32.938233 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 816f1f168f16 23 hours ago 427MB 2026-03-27 01:16:32.938237 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4a95d40d15b2 23 hours ago 309MB 2026-03-27 01:16:32.938241 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 d1443ec30884 23 hours ago 368MB 2026-03-27 01:16:32.938246 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 66153feb3e4f 23 hours ago 317MB 2026-03-27 01:16:32.938250 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 178e99f5361e 23 hours ago 303MB 2026-03-27 01:16:32.938262 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 e4410dbd0ff9 23 hours ago 312MB 2026-03-27 01:16:32.938267 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 bfb815d62a5f 23 hours ago 463MB 2026-03-27 01:16:32.938271 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 3c9db6188352 23 hours ago 1.16GB 2026-03-27 01:16:32.938275 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 6d956e3a25b0 23 hours ago 284MB 2026-03-27 01:16:32.938280 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 25ce59cb13c2 23 hours ago 290MB 2026-03-27 01:16:32.938284 | orchestrator | registry.osism.tech/kolla/redis 2024.2 530f5662802f 23 hours ago 284MB 2026-03-27 01:16:32.938288 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 1b4fdd3229f9 23 hours 
ago 290MB 2026-03-27 01:16:32.938293 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 fef7c7f77cfc 23 hours ago 1.14GB 2026-03-27 01:16:32.938297 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 91cbb555433e 23 hours ago 1.25GB 2026-03-27 01:16:32.938301 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 30104f43e30a 23 hours ago 1.04GB 2026-03-27 01:16:32.938306 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 05a5205fe5b4 23 hours ago 1.06GB 2026-03-27 01:16:32.938310 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 e9e673493202 23 hours ago 1.04GB 2026-03-27 01:16:32.938314 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 3fc93f95995d 23 hours ago 1.06GB 2026-03-27 01:16:32.938318 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 13e09bb4aa5f 23 hours ago 1.04GB 2026-03-27 01:16:32.938323 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 1e1953c447ea 23 hours ago 1GB 2026-03-27 01:16:32.938327 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 bf1d8111b5a8 23 hours ago 1GB 2026-03-27 01:16:32.938331 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 937c8ef2a0bc 23 hours ago 1GB 2026-03-27 01:16:32.938336 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 3d53e89b1b38 23 hours ago 1.11GB 2026-03-27 01:16:32.938340 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 a7f3e54867bb 23 hours ago 985MB 2026-03-27 01:16:32.938344 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 75a5e76c37d0 23 hours ago 985MB 2026-03-27 01:16:32.938359 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 5ec1e3d6fedc 23 hours ago 984MB 2026-03-27 01:16:32.938364 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 62d413652391 23 hours ago 985MB 2026-03-27 01:16:32.938368 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 72b93fc3ec6c 23 hours ago 
1.42GB 2026-03-27 01:16:32.938372 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 7bfe8c737cd7 23 hours ago 1.42GB 2026-03-27 01:16:32.938380 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 c7fb0ab163d1 23 hours ago 1.73GB 2026-03-27 01:16:32.938384 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 a8e1e1f3b029 23 hours ago 1.42GB 2026-03-27 01:16:32.938389 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 21651d3f361f 23 hours ago 1.17GB 2026-03-27 01:16:32.938393 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8081f4304ddf 23 hours ago 986MB 2026-03-27 01:16:32.938397 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 58dee68c3cca 23 hours ago 1.05GB 2026-03-27 01:16:32.938404 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0f5641e0ca65 23 hours ago 1.08GB 2026-03-27 01:16:32.938408 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 668fe6d9360b 23 hours ago 1.05GB 2026-03-27 01:16:32.938412 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 ea42bd00c609 23 hours ago 1GB 2026-03-27 01:16:32.938417 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 c59485d174e4 23 hours ago 1.06GB 2026-03-27 01:16:32.938421 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 13892a56c8cc 23 hours ago 987MB 2026-03-27 01:16:32.938425 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 d7a73180c53f 23 hours ago 987MB 2026-03-27 01:16:32.938429 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 09d9c8072f6c 23 hours ago 1e+03MB 2026-03-27 01:16:32.938433 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 55856ab7d868 23 hours ago 995MB 2026-03-27 01:16:32.938438 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d14bcb1ad2fe 23 hours ago 1e+03MB 2026-03-27 01:16:32.938442 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 
0b2a9e3c42ee 23 hours ago 995MB 2026-03-27 01:16:32.938446 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0880264a512d 23 hours ago 995MB 2026-03-27 01:16:32.938451 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 254f09a86c1e 23 hours ago 994MB 2026-03-27 01:16:32.938455 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 e006f47563bc 23 hours ago 1.22GB 2026-03-27 01:16:32.938459 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 182635bf5d09 23 hours ago 1.22GB 2026-03-27 01:16:32.938463 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 5c5b163c1f6c 23 hours ago 1.38GB 2026-03-27 01:16:32.938468 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 09c703a068c0 23 hours ago 1.22GB 2026-03-27 01:16:32.938472 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 f431e1de7e75 23 hours ago 851MB 2026-03-27 01:16:32.938476 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 47a643f43ce7 23 hours ago 851MB 2026-03-27 01:16:32.938480 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 acd90303a99f 23 hours ago 851MB 2026-03-27 01:16:32.938485 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 86e689ff7291 23 hours ago 851MB 2026-03-27 01:16:33.038938 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-27 01:16:33.039572 | orchestrator | ++ semver latest 5.0.0 2026-03-27 01:16:33.087861 | orchestrator | 2026-03-27 01:16:33.087920 | orchestrator | ## Containers @ testbed-node-1 2026-03-27 01:16:33.087928 | orchestrator | 2026-03-27 01:16:33.087935 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-27 01:16:33.087941 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-27 01:16:33.087947 | orchestrator | + echo 2026-03-27 01:16:33.087954 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-27 01:16:33.087960 | orchestrator | + echo 2026-03-27 01:16:33.087965 | orchestrator | + osism 
container testbed-node-1 ps 2026-03-27 01:16:34.361555 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-27 01:16:34.361612 | orchestrator | e4d2da05493e registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-27 01:16:34.361621 | orchestrator | 66d07f7b5d0b registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-27 01:16:34.361628 | orchestrator | f743476c8420 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-27 01:16:34.361634 | orchestrator | 65abf203e4c0 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-03-27 01:16:34.361641 | orchestrator | ce0a9d24614e registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-03-27 01:16:34.361652 | orchestrator | a6df85310b07 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2026-03-27 01:16:34.361685 | orchestrator | 1fa90633dde2 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-03-27 01:16:34.361692 | orchestrator | 98592a149141 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2026-03-27 01:16:34.361701 | orchestrator | ce3e74d9ae8a registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-03-27 01:16:34.361707 | orchestrator | 27a2e7e2bfd8 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2026-03-27 01:16:34.361714 | orchestrator | 937206b98595 
registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup 2026-03-27 01:16:34.361721 | orchestrator | 5dd737319ac8 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume 2026-03-27 01:16:34.361728 | orchestrator | 08d235932310 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2026-03-27 01:16:34.361740 | orchestrator | 3050bab4c839 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2026-03-27 01:16:34.361756 | orchestrator | 4aa55135e99c registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) cinder_api 2026-03-27 01:16:34.361764 | orchestrator | f66f8373ec81 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2026-03-27 01:16:34.361770 | orchestrator | 1fbbb2b5c134 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2026-03-27 01:16:34.361777 | orchestrator | 4c2678d0c0c3 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2026-03-27 01:16:34.361782 | orchestrator | 6beb661b4b4b registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2026-03-27 01:16:34.361800 | orchestrator | 1202b9ddf297 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2026-03-27 01:16:34.361806 | orchestrator | cbbd203507b0 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) 
magnum_conductor 2026-03-27 01:16:34.361825 | orchestrator | d61987310ced registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2026-03-27 01:16:34.361832 | orchestrator | d8515c152ead registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server 2026-03-27 01:16:34.361838 | orchestrator | e04268f58b63 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2026-03-27 01:16:34.361845 | orchestrator | ae40a79d1d36 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2026-03-27 01:16:34.361851 | orchestrator | ce63f77a2dfc registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns 2026-03-27 01:16:34.361858 | orchestrator | 3c13aeb97fe5 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2026-03-27 01:16:34.361866 | orchestrator | a131fa82b89b registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central 2026-03-27 01:16:34.361870 | orchestrator | a5d0cecff79d registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_api 2026-03-27 01:16:34.361873 | orchestrator | b80f09280b2d registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2026-03-27 01:16:34.361877 | orchestrator | 774e139f2633 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2026-03-27 01:16:34.361881 | orchestrator | 8fdb13b9afce registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init 
--single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2026-03-27 01:16:34.361894 | orchestrator | 35c2bb6c1fb7 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2026-03-27 01:16:34.361904 | orchestrator | ab04b43daeab registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2026-03-27 01:16:34.361911 | orchestrator | 757a6530f203 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-03-27 01:16:34.361920 | orchestrator | 61c39db5963a registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-03-27 01:16:34.361926 | orchestrator | a870a18a0d00 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-03-27 01:16:34.361932 | orchestrator | 2e91794a7f9b registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-03-27 01:16:34.361938 | orchestrator | 86ac5cf2e027 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-03-27 01:16:34.361949 | orchestrator | bbcc2ba4a837 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-03-27 01:16:34.361953 | orchestrator | 40c5f46e98d4 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2026-03-27 01:16:34.361957 | orchestrator | 357d7a0cd5e0 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1 2026-03-27 01:16:34.361961 | orchestrator | e11e38221a5b registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 
minutes keepalived 2026-03-27 01:16:34.361965 | orchestrator | c853eebab049 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 22 minutes (healthy) proxysql 2026-03-27 01:16:34.361972 | orchestrator | 7a7eca6148de registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2026-03-27 01:16:34.361976 | orchestrator | 5658cf494952 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd 2026-03-27 01:16:34.361980 | orchestrator | d22387193d8f registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db 2026-03-27 01:16:34.361983 | orchestrator | 94f142383f2c registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2026-03-27 01:16:34.361987 | orchestrator | fb710f894dfc registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1 2026-03-27 01:16:34.361991 | orchestrator | 50f3a64e2f05 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-03-27 01:16:34.361994 | orchestrator | 9c900c0fc643 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2026-03-27 01:16:34.361998 | orchestrator | 0e9e5349cbae registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-03-27 01:16:34.362005 | orchestrator | e38669d3a96a registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2026-03-27 01:16:34.362008 | orchestrator | 36450aa2f331 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2026-03-27 01:16:34.362048 | orchestrator | 1581fd287545 
registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2026-03-27 01:16:34.362053 | orchestrator | 90c0ed37d3ad registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2026-03-27 01:16:34.362057 | orchestrator | 17da5153ed99 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2026-03-27 01:16:34.362061 | orchestrator | 2a093816a7b2 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2026-03-27 01:16:34.362074 | orchestrator | 7405cd97dc79 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-03-27 01:16:34.460047 | orchestrator | 2026-03-27 01:16:34.460095 | orchestrator | ## Images @ testbed-node-1 2026-03-27 01:16:34.460101 | orchestrator | 2026-03-27 01:16:34.460105 | orchestrator | + echo 2026-03-27 01:16:34.460110 | orchestrator | + echo '## Images @ testbed-node-1' 2026-03-27 01:16:34.460114 | orchestrator | + echo 2026-03-27 01:16:34.460118 | orchestrator | + osism container testbed-node-1 images 2026-03-27 01:16:35.726085 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-27 01:16:35.726160 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 51d59d2a41b6 21 hours ago 1.35GB 2026-03-27 01:16:35.726168 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 4a1ba63d8d47 23 hours ago 1.57GB 2026-03-27 01:16:35.726172 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 fa16f377b9ca 23 hours ago 1.54GB 2026-03-27 01:16:35.726176 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a4acb9fc910d 23 hours ago 287MB 2026-03-27 01:16:35.726180 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 3898c8190568 23 hours ago 285MB 2026-03-27 01:16:35.726183 | orchestrator | registry.osism.tech/kolla/cron 2024.2 c9efd58c29d6 23 hours ago 277MB 2026-03-27 
01:16:35.726187 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 dbdbc1ba9592 23 hours ago 1.04GB 2026-03-27 01:16:35.726191 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 84283d009f42 23 hours ago 333MB 2026-03-27 01:16:35.726194 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2a11b5cabe47 23 hours ago 590MB 2026-03-27 01:16:35.726198 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 179e517dde06 23 hours ago 277MB 2026-03-27 01:16:35.726202 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 4b35b95033e8 23 hours ago 679MB 2026-03-27 01:16:35.726206 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 816f1f168f16 23 hours ago 427MB 2026-03-27 01:16:35.726210 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4a95d40d15b2 23 hours ago 309MB 2026-03-27 01:16:35.726213 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 d1443ec30884 23 hours ago 368MB 2026-03-27 01:16:35.726217 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 66153feb3e4f 23 hours ago 317MB 2026-03-27 01:16:35.726221 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 178e99f5361e 23 hours ago 303MB 2026-03-27 01:16:35.726224 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 e4410dbd0ff9 23 hours ago 312MB 2026-03-27 01:16:35.726228 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 bfb815d62a5f 23 hours ago 463MB 2026-03-27 01:16:35.726232 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 3c9db6188352 23 hours ago 1.16GB 2026-03-27 01:16:35.726236 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 6d956e3a25b0 23 hours ago 284MB 2026-03-27 01:16:35.726239 | orchestrator | registry.osism.tech/kolla/redis 2024.2 530f5662802f 23 hours ago 284MB 2026-03-27 01:16:35.726243 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 25ce59cb13c2 23 hours ago 290MB 
2026-03-27 01:16:35.726247 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 1b4fdd3229f9 23 hours ago 290MB 2026-03-27 01:16:35.726250 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 fef7c7f77cfc 23 hours ago 1.14GB 2026-03-27 01:16:35.726254 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 91cbb555433e 23 hours ago 1.25GB 2026-03-27 01:16:35.726270 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 30104f43e30a 23 hours ago 1.04GB 2026-03-27 01:16:35.726274 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 05a5205fe5b4 23 hours ago 1.06GB 2026-03-27 01:16:35.726278 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 e9e673493202 23 hours ago 1.04GB 2026-03-27 01:16:35.726282 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 3fc93f95995d 23 hours ago 1.06GB 2026-03-27 01:16:35.726285 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 13e09bb4aa5f 23 hours ago 1.04GB 2026-03-27 01:16:35.726289 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 1e1953c447ea 23 hours ago 1GB 2026-03-27 01:16:35.726293 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 bf1d8111b5a8 23 hours ago 1GB 2026-03-27 01:16:35.726297 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 937c8ef2a0bc 23 hours ago 1GB 2026-03-27 01:16:35.726309 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 3d53e89b1b38 23 hours ago 1.11GB 2026-03-27 01:16:35.726313 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 72b93fc3ec6c 23 hours ago 1.42GB 2026-03-27 01:16:35.726316 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 7bfe8c737cd7 23 hours ago 1.42GB 2026-03-27 01:16:35.726329 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 c7fb0ab163d1 23 hours ago 1.73GB 2026-03-27 01:16:35.726333 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 a8e1e1f3b029 23 
hours ago 1.42GB 2026-03-27 01:16:35.726337 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 21651d3f361f 23 hours ago 1.17GB 2026-03-27 01:16:35.726341 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8081f4304ddf 23 hours ago 986MB 2026-03-27 01:16:35.726344 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 58dee68c3cca 23 hours ago 1.05GB 2026-03-27 01:16:35.726348 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0f5641e0ca65 23 hours ago 1.08GB 2026-03-27 01:16:35.726352 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 668fe6d9360b 23 hours ago 1.05GB 2026-03-27 01:16:35.726356 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 09d9c8072f6c 23 hours ago 1e+03MB 2026-03-27 01:16:35.726359 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 55856ab7d868 23 hours ago 995MB 2026-03-27 01:16:35.726363 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d14bcb1ad2fe 23 hours ago 1e+03MB 2026-03-27 01:16:35.726367 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 0b2a9e3c42ee 23 hours ago 995MB 2026-03-27 01:16:35.726370 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0880264a512d 23 hours ago 995MB 2026-03-27 01:16:35.726374 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 254f09a86c1e 23 hours ago 994MB 2026-03-27 01:16:35.726378 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 e006f47563bc 23 hours ago 1.22GB 2026-03-27 01:16:35.726381 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 182635bf5d09 23 hours ago 1.22GB 2026-03-27 01:16:35.726385 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 5c5b163c1f6c 23 hours ago 1.38GB 2026-03-27 01:16:35.726389 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 09c703a068c0 23 hours ago 1.22GB 2026-03-27 01:16:35.726392 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 f431e1de7e75 
23 hours ago 851MB 2026-03-27 01:16:35.726396 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 47a643f43ce7 23 hours ago 851MB 2026-03-27 01:16:35.726403 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 acd90303a99f 23 hours ago 851MB 2026-03-27 01:16:35.726407 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 86e689ff7291 23 hours ago 851MB 2026-03-27 01:16:35.828120 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-27 01:16:35.828461 | orchestrator | ++ semver latest 5.0.0 2026-03-27 01:16:35.887417 | orchestrator | 2026-03-27 01:16:35.887472 | orchestrator | ## Containers @ testbed-node-2 2026-03-27 01:16:35.887482 | orchestrator | 2026-03-27 01:16:35.887489 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-27 01:16:35.887496 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-27 01:16:35.887502 | orchestrator | + echo 2026-03-27 01:16:35.887508 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-27 01:16:35.887515 | orchestrator | + echo 2026-03-27 01:16:35.887522 | orchestrator | + osism container testbed-node-2 ps 2026-03-27 01:16:37.269246 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-27 01:16:37.269299 | orchestrator | cf80302a3832 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-27 01:16:37.269305 | orchestrator | c056feaacca0 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-27 01:16:37.269310 | orchestrator | 71998ce4bec8 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-27 01:16:37.269313 | orchestrator | 1d0aed3ac962 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-03-27 
01:16:37.269317 | orchestrator | 79f4f8d90bf7 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-03-27 01:16:37.269321 | orchestrator | fb164e03fde8 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-03-27 01:16:37.269325 | orchestrator | e4d67c88c6ec registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-03-27 01:16:37.269329 | orchestrator | 70f7f7199fa1 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2026-03-27 01:16:37.269333 | orchestrator | 9f6cb9308c82 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-03-27 01:16:37.269336 | orchestrator | 5e0446c1e3de registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2026-03-27 01:16:37.269340 | orchestrator | 79db96b2b0e7 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup 2026-03-27 01:16:37.269347 | orchestrator | 88c37a4c2b4d registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume 2026-03-27 01:16:37.269353 | orchestrator | fcd15a7e8c77 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2026-03-27 01:16:37.269359 | orchestrator | a8332b718b42 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2026-03-27 01:16:37.269377 | orchestrator | 440db0e04d9a registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-03-27 01:16:37.269401 | orchestrator | f5a953a0b302 
registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2026-03-27 01:16:37.269409 | orchestrator | d0a592e3f4df registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2026-03-27 01:16:37.269415 | orchestrator | 2e2aa6991976 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2026-03-27 01:16:37.269421 | orchestrator | c85a213db68c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2026-03-27 01:16:37.269427 | orchestrator | 1d01c78c52e1 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2026-03-27 01:16:37.269434 | orchestrator | 09fd050ca275 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2026-03-27 01:16:37.269451 | orchestrator | cb302576bd95 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2026-03-27 01:16:37.269457 | orchestrator | 7229b6b13ef8 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server 2026-03-27 01:16:37.269464 | orchestrator | 61e5bf6f1614 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2026-03-27 01:16:37.269470 | orchestrator | 07e5795a2ed3 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2026-03-27 01:16:37.269477 | orchestrator | d869932583e8 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) 
designate_mdns 2026-03-27 01:16:37.269484 | orchestrator | 308ac26f1828 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2026-03-27 01:16:37.269490 | orchestrator | 585628eff183 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central 2026-03-27 01:16:37.269496 | orchestrator | 77ed32999a21 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_api 2026-03-27 01:16:37.269503 | orchestrator | fc0060122c61 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2026-03-27 01:16:37.269509 | orchestrator | c602b0c6832c registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2026-03-27 01:16:37.269515 | orchestrator | 2db1584a6e7b registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2026-03-27 01:16:37.269521 | orchestrator | 76175220a698 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2026-03-27 01:16:37.269527 | orchestrator | 052b271e5498 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2026-03-27 01:16:37.269538 | orchestrator | 8eab1bd60cf2 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-03-27 01:16:37.269544 | orchestrator | ae6cea492973 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-03-27 01:16:37.269550 | orchestrator | 360007e604f5 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 
minutes ago Up 18 minutes (healthy) horizon 2026-03-27 01:16:37.269557 | orchestrator | d6ddd116e812 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-03-27 01:16:37.269563 | orchestrator | 18a70140515f registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-03-27 01:16:37.269570 | orchestrator | 61444d6068e1 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-03-27 01:16:37.269576 | orchestrator | 42401285fedf registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2026-03-27 01:16:37.269582 | orchestrator | 58f2249173ce registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2026-03-27 01:16:37.269587 | orchestrator | a17af363ff04 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-03-27 01:16:37.269593 | orchestrator | 65e2e55ec27f registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2026-03-27 01:16:37.269603 | orchestrator | d1a874523dad registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2026-03-27 01:16:37.269610 | orchestrator | 23316d2749b2 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd 2026-03-27 01:16:37.269617 | orchestrator | cf119dad7b97 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db 2026-03-27 01:16:37.269623 | orchestrator | 8e00ee626c2b registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2026-03-27 01:16:37.269629 | orchestrator | 
07f01a60cd74 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2026-03-27 01:16:37.269639 | orchestrator | c31375d6191b registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2026-03-27 01:16:37.269646 | orchestrator | 8f41b44c7cbf registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-03-27 01:16:37.269691 | orchestrator | 1491f77c449c registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-03-27 01:16:37.269700 | orchestrator | 650ec2ab99e2 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2026-03-27 01:16:37.269706 | orchestrator | dd7a1b950e35 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2026-03-27 01:16:37.269717 | orchestrator | 44afa2b83a60 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2026-03-27 01:16:37.269723 | orchestrator | b60f36a79dfc registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2026-03-27 01:16:37.269730 | orchestrator | 4ba505160dd5 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2026-03-27 01:16:37.269736 | orchestrator | a3a08b8d09dc registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2026-03-27 01:16:37.269742 | orchestrator | a9a1d7f1c0a9 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-03-27 01:16:37.371523 | orchestrator | 2026-03-27 01:16:37.371579 | orchestrator | ## Images @ testbed-node-2 2026-03-27 01:16:37.371589 | 
orchestrator | 2026-03-27 01:16:37.371595 | orchestrator | + echo 2026-03-27 01:16:37.371602 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-27 01:16:37.371608 | orchestrator | + echo 2026-03-27 01:16:37.371614 | orchestrator | + osism container testbed-node-2 images 2026-03-27 01:16:38.728192 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-27 01:16:38.728254 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 51d59d2a41b6 21 hours ago 1.35GB 2026-03-27 01:16:38.728266 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 4a1ba63d8d47 23 hours ago 1.57GB 2026-03-27 01:16:38.728283 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 fa16f377b9ca 23 hours ago 1.54GB 2026-03-27 01:16:38.728291 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a4acb9fc910d 23 hours ago 287MB 2026-03-27 01:16:38.728298 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 3898c8190568 23 hours ago 285MB 2026-03-27 01:16:38.728305 | orchestrator | registry.osism.tech/kolla/cron 2024.2 c9efd58c29d6 23 hours ago 277MB 2026-03-27 01:16:38.728313 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 dbdbc1ba9592 23 hours ago 1.04GB 2026-03-27 01:16:38.728321 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 84283d009f42 23 hours ago 333MB 2026-03-27 01:16:38.728329 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2a11b5cabe47 23 hours ago 590MB 2026-03-27 01:16:38.728337 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 179e517dde06 23 hours ago 277MB 2026-03-27 01:16:38.728345 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 4b35b95033e8 23 hours ago 679MB 2026-03-27 01:16:38.728353 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 816f1f168f16 23 hours ago 427MB 2026-03-27 01:16:38.728361 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4a95d40d15b2 23 hours ago 309MB 2026-03-27 01:16:38.728369 | orchestrator | 
registry.osism.tech/kolla/prometheus-cadvisor 2024.2 d1443ec30884 23 hours ago 368MB 2026-03-27 01:16:38.728376 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 66153feb3e4f 23 hours ago 317MB 2026-03-27 01:16:38.728384 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 178e99f5361e 23 hours ago 303MB 2026-03-27 01:16:38.728392 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 e4410dbd0ff9 23 hours ago 312MB 2026-03-27 01:16:38.728400 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 bfb815d62a5f 23 hours ago 463MB 2026-03-27 01:16:38.728407 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 3c9db6188352 23 hours ago 1.16GB 2026-03-27 01:16:38.728427 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 6d956e3a25b0 23 hours ago 284MB 2026-03-27 01:16:38.728434 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 25ce59cb13c2 23 hours ago 290MB 2026-03-27 01:16:38.728441 | orchestrator | registry.osism.tech/kolla/redis 2024.2 530f5662802f 23 hours ago 284MB 2026-03-27 01:16:38.728448 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 1b4fdd3229f9 23 hours ago 290MB 2026-03-27 01:16:38.728454 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 fef7c7f77cfc 23 hours ago 1.14GB 2026-03-27 01:16:38.728463 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 91cbb555433e 23 hours ago 1.25GB 2026-03-27 01:16:38.728470 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 30104f43e30a 23 hours ago 1.04GB 2026-03-27 01:16:38.728478 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 05a5205fe5b4 23 hours ago 1.06GB 2026-03-27 01:16:38.728486 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 e9e673493202 23 hours ago 1.04GB 2026-03-27 01:16:38.728494 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 3fc93f95995d 23 hours ago 1.06GB 
2026-03-27 01:16:38.728501 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 13e09bb4aa5f 23 hours ago 1.04GB 2026-03-27 01:16:38.728510 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 1e1953c447ea 23 hours ago 1GB 2026-03-27 01:16:38.728518 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 bf1d8111b5a8 23 hours ago 1GB 2026-03-27 01:16:38.728526 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 937c8ef2a0bc 23 hours ago 1GB 2026-03-27 01:16:38.728532 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 3d53e89b1b38 23 hours ago 1.11GB 2026-03-27 01:16:38.728536 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 72b93fc3ec6c 23 hours ago 1.42GB 2026-03-27 01:16:38.728540 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 7bfe8c737cd7 23 hours ago 1.42GB 2026-03-27 01:16:38.728553 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 c7fb0ab163d1 23 hours ago 1.73GB 2026-03-27 01:16:38.728558 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 a8e1e1f3b029 23 hours ago 1.42GB 2026-03-27 01:16:38.728562 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 21651d3f361f 23 hours ago 1.17GB 2026-03-27 01:16:38.728566 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8081f4304ddf 23 hours ago 986MB 2026-03-27 01:16:38.728570 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 58dee68c3cca 23 hours ago 1.05GB 2026-03-27 01:16:38.728574 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0f5641e0ca65 23 hours ago 1.08GB 2026-03-27 01:16:38.728578 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 668fe6d9360b 23 hours ago 1.05GB 2026-03-27 01:16:38.728582 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 09d9c8072f6c 23 hours ago 1e+03MB 2026-03-27 01:16:38.728587 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 55856ab7d868 23 hours ago 995MB 
2026-03-27 01:16:38.728591 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d14bcb1ad2fe 23 hours ago 1e+03MB 2026-03-27 01:16:38.728595 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 0b2a9e3c42ee 23 hours ago 995MB 2026-03-27 01:16:38.728599 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0880264a512d 23 hours ago 995MB 2026-03-27 01:16:38.728608 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 254f09a86c1e 23 hours ago 994MB 2026-03-27 01:16:38.728617 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 e006f47563bc 23 hours ago 1.22GB 2026-03-27 01:16:38.728621 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 182635bf5d09 23 hours ago 1.22GB 2026-03-27 01:16:38.728625 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 5c5b163c1f6c 23 hours ago 1.38GB 2026-03-27 01:16:38.728629 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 09c703a068c0 23 hours ago 1.22GB 2026-03-27 01:16:38.728633 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 f431e1de7e75 23 hours ago 851MB 2026-03-27 01:16:38.728637 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 acd90303a99f 23 hours ago 851MB 2026-03-27 01:16:38.728641 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 47a643f43ce7 23 hours ago 851MB 2026-03-27 01:16:38.728645 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 86e689ff7291 23 hours ago 851MB 2026-03-27 01:16:38.831310 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-27 01:16:38.836624 | orchestrator | + set -e 2026-03-27 01:16:38.836738 | orchestrator | + source /opt/manager-vars.sh 2026-03-27 01:16:38.837373 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-27 01:16:38.837392 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-27 01:16:38.837399 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-27 01:16:38.837405 | orchestrator | ++ CEPH_VERSION=reef 
2026-03-27 01:16:38.837411 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-27 01:16:38.837417 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-27 01:16:38.837423 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-27 01:16:38.837429 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-27 01:16:38.837434 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-27 01:16:38.837440 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-27 01:16:38.837445 | orchestrator | ++ export ARA=false 2026-03-27 01:16:38.837452 | orchestrator | ++ ARA=false 2026-03-27 01:16:38.837457 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-27 01:16:38.837463 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-27 01:16:38.837468 | orchestrator | ++ export TEMPEST=true 2026-03-27 01:16:38.837472 | orchestrator | ++ TEMPEST=true 2026-03-27 01:16:38.837477 | orchestrator | ++ export IS_ZUUL=true 2026-03-27 01:16:38.837482 | orchestrator | ++ IS_ZUUL=true 2026-03-27 01:16:38.837487 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154 2026-03-27 01:16:38.837492 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154 2026-03-27 01:16:38.837497 | orchestrator | ++ export EXTERNAL_API=false 2026-03-27 01:16:38.837501 | orchestrator | ++ EXTERNAL_API=false 2026-03-27 01:16:38.837506 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-27 01:16:38.837511 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-27 01:16:38.837516 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-27 01:16:38.837521 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-27 01:16:38.837525 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-27 01:16:38.837530 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-27 01:16:38.837535 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-27 01:16:38.837540 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-27 01:16:38.847057 | orchestrator | + set -e 2026-03-27 
01:16:38.847106 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-27 01:16:38.847112 | orchestrator | ++ export INTERACTIVE=false 2026-03-27 01:16:38.847119 | orchestrator | ++ INTERACTIVE=false 2026-03-27 01:16:38.847124 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-27 01:16:38.847129 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-27 01:16:38.847135 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-27 01:16:38.847931 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-27 01:16:38.850230 | orchestrator | 2026-03-27 01:16:38.850265 | orchestrator | # Ceph status 2026-03-27 01:16:38.850271 | orchestrator | 2026-03-27 01:16:38.850276 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-27 01:16:38.850280 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-27 01:16:38.850285 | orchestrator | + echo 2026-03-27 01:16:38.850289 | orchestrator | + echo '# Ceph status' 2026-03-27 01:16:38.850293 | orchestrator | + echo 2026-03-27 01:16:38.850297 | orchestrator | + ceph -s 2026-03-27 01:16:39.362368 | orchestrator | cluster: 2026-03-27 01:16:39.362442 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-27 01:16:39.362449 | orchestrator | health: HEALTH_OK 2026-03-27 01:16:39.362454 | orchestrator | 2026-03-27 01:16:39.362458 | orchestrator | services: 2026-03-27 01:16:39.362462 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2026-03-27 01:16:39.362466 | orchestrator | mgr: testbed-node-1(active, since 16m), standbys: testbed-node-2, testbed-node-0 2026-03-27 01:16:39.362471 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-27 01:16:39.362475 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 23m) 2026-03-27 01:16:39.362479 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-27 01:16:39.362482 | orchestrator | 2026-03-27 01:16:39.362486 | 
orchestrator | data: 2026-03-27 01:16:39.362490 | orchestrator | volumes: 1/1 healthy 2026-03-27 01:16:39.362494 | orchestrator | pools: 14 pools, 401 pgs 2026-03-27 01:16:39.362497 | orchestrator | objects: 556 objects, 2.2 GiB 2026-03-27 01:16:39.362501 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-27 01:16:39.362505 | orchestrator | pgs: 401 active+clean 2026-03-27 01:16:39.362509 | orchestrator | 2026-03-27 01:16:39.404675 | orchestrator | 2026-03-27 01:16:39.404723 | orchestrator | # Ceph versions 2026-03-27 01:16:39.404731 | orchestrator | 2026-03-27 01:16:39.404739 | orchestrator | + echo 2026-03-27 01:16:39.404746 | orchestrator | + echo '# Ceph versions' 2026-03-27 01:16:39.404753 | orchestrator | + echo 2026-03-27 01:16:39.404759 | orchestrator | + ceph versions 2026-03-27 01:16:39.953459 | orchestrator | { 2026-03-27 01:16:39.953524 | orchestrator | "mon": { 2026-03-27 01:16:39.953535 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-27 01:16:39.953544 | orchestrator | }, 2026-03-27 01:16:39.953552 | orchestrator | "mgr": { 2026-03-27 01:16:39.953573 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-27 01:16:39.953581 | orchestrator | }, 2026-03-27 01:16:39.953589 | orchestrator | "osd": { 2026-03-27 01:16:39.953597 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-03-27 01:16:39.953605 | orchestrator | }, 2026-03-27 01:16:39.953613 | orchestrator | "mds": { 2026-03-27 01:16:39.953621 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-27 01:16:39.953629 | orchestrator | }, 2026-03-27 01:16:39.953637 | orchestrator | "rgw": { 2026-03-27 01:16:39.953646 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-27 01:16:39.953731 | orchestrator | }, 2026-03-27 
01:16:39.953739 | orchestrator | "overall": { 2026-03-27 01:16:39.953748 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-03-27 01:16:39.953756 | orchestrator | } 2026-03-27 01:16:39.953764 | orchestrator | } 2026-03-27 01:16:40.016215 | orchestrator | 2026-03-27 01:16:40.016262 | orchestrator | # Ceph OSD tree 2026-03-27 01:16:40.016268 | orchestrator | 2026-03-27 01:16:40.016273 | orchestrator | + echo 2026-03-27 01:16:40.016277 | orchestrator | + echo '# Ceph OSD tree' 2026-03-27 01:16:40.016282 | orchestrator | + echo 2026-03-27 01:16:40.016286 | orchestrator | + ceph osd df tree 2026-03-27 01:16:40.571743 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-27 01:16:40.571807 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-03-27 01:16:40.571816 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-03-27 01:16:40.571823 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.00 1.18 204 up osd.1 2026-03-27 01:16:40.571830 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 988 MiB 915 MiB 1 KiB 74 MiB 19 GiB 4.83 0.82 186 up osd.4 2026-03-27 01:16:40.571837 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-03-27 01:16:40.571844 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1011 MiB 1 KiB 74 MiB 19 GiB 5.30 0.90 174 up osd.0 2026-03-27 01:16:40.571852 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.53 1.10 218 up osd.3 2026-03-27 01:16:40.571875 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-03-27 01:16:40.571879 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.69 1.13 191 up osd.2 2026-03-27 01:16:40.571883 | 
orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 979 MiB 1 KiB 74 MiB 19 GiB 5.15 0.87 197 up osd.5 2026-03-27 01:16:40.571887 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-03-27 01:16:40.571891 | orchestrator | MIN/MAX VAR: 0.82/1.18 STDDEV: 0.85 2026-03-27 01:16:40.629603 | orchestrator | 2026-03-27 01:16:40.629688 | orchestrator | # Ceph monitor status 2026-03-27 01:16:40.629700 | orchestrator | 2026-03-27 01:16:40.629708 | orchestrator | + echo 2026-03-27 01:16:40.629715 | orchestrator | + echo '# Ceph monitor status' 2026-03-27 01:16:40.629721 | orchestrator | + echo 2026-03-27 01:16:40.629728 | orchestrator | + ceph mon stat 2026-03-27 01:16:41.169546 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-27 01:16:41.214146 | orchestrator | 2026-03-27 01:16:41.214202 | orchestrator | # Ceph quorum status 2026-03-27 01:16:41.214212 | orchestrator | 2026-03-27 01:16:41.214220 | orchestrator | + echo 2026-03-27 01:16:41.214225 | orchestrator | + echo '# Ceph quorum status' 2026-03-27 01:16:41.214229 | orchestrator | + echo 2026-03-27 01:16:41.215384 | orchestrator | + ceph quorum_status 2026-03-27 01:16:41.215427 | orchestrator | + jq 2026-03-27 01:16:41.803505 | orchestrator | { 2026-03-27 01:16:41.803592 | orchestrator | "election_epoch": 6, 2026-03-27 01:16:41.803606 | orchestrator | "quorum": [ 2026-03-27 01:16:41.803615 | orchestrator | 0, 2026-03-27 01:16:41.803623 | orchestrator | 1, 2026-03-27 01:16:41.803633 | orchestrator | 2 2026-03-27 01:16:41.803731 | orchestrator | ], 2026-03-27 01:16:41.803749 | orchestrator | "quorum_names": [ 2026-03-27 01:16:41.803799 | orchestrator | "testbed-node-0", 
2026-03-27 01:16:41.803813 | orchestrator | "testbed-node-1", 2026-03-27 01:16:41.803827 | orchestrator | "testbed-node-2" 2026-03-27 01:16:41.803841 | orchestrator | ], 2026-03-27 01:16:41.803855 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-27 01:16:41.803869 | orchestrator | "quorum_age": 1586, 2026-03-27 01:16:41.803882 | orchestrator | "features": { 2026-03-27 01:16:41.803896 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-27 01:16:41.803910 | orchestrator | "quorum_mon": [ 2026-03-27 01:16:41.803924 | orchestrator | "kraken", 2026-03-27 01:16:41.803937 | orchestrator | "luminous", 2026-03-27 01:16:41.803951 | orchestrator | "mimic", 2026-03-27 01:16:41.803964 | orchestrator | "osdmap-prune", 2026-03-27 01:16:41.803978 | orchestrator | "nautilus", 2026-03-27 01:16:41.803991 | orchestrator | "octopus", 2026-03-27 01:16:41.804005 | orchestrator | "pacific", 2026-03-27 01:16:41.804013 | orchestrator | "elector-pinging", 2026-03-27 01:16:41.804021 | orchestrator | "quincy", 2026-03-27 01:16:41.804029 | orchestrator | "reef" 2026-03-27 01:16:41.804038 | orchestrator | ] 2026-03-27 01:16:41.804053 | orchestrator | }, 2026-03-27 01:16:41.804066 | orchestrator | "monmap": { 2026-03-27 01:16:41.804079 | orchestrator | "epoch": 1, 2026-03-27 01:16:41.804093 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-27 01:16:41.804107 | orchestrator | "modified": "2026-03-27T00:49:53.540430Z", 2026-03-27 01:16:41.804122 | orchestrator | "created": "2026-03-27T00:49:53.540430Z", 2026-03-27 01:16:41.804136 | orchestrator | "min_mon_release": 18, 2026-03-27 01:16:41.804145 | orchestrator | "min_mon_release_name": "reef", 2026-03-27 01:16:41.804154 | orchestrator | "election_strategy": 1, 2026-03-27 01:16:41.804162 | orchestrator | "disallowed_leaders": "", 2026-03-27 01:16:41.804172 | orchestrator | "stretch_mode": false, 2026-03-27 01:16:41.804180 | orchestrator | "tiebreaker_mon": "", 2026-03-27 01:16:41.804237 | orchestrator 
| "removed_ranks": "", 2026-03-27 01:16:41.804247 | orchestrator | "features": { 2026-03-27 01:16:41.804256 | orchestrator | "persistent": [ 2026-03-27 01:16:41.804265 | orchestrator | "kraken", 2026-03-27 01:16:41.804274 | orchestrator | "luminous", 2026-03-27 01:16:41.804283 | orchestrator | "mimic", 2026-03-27 01:16:41.804291 | orchestrator | "osdmap-prune", 2026-03-27 01:16:41.804318 | orchestrator | "nautilus", 2026-03-27 01:16:41.804328 | orchestrator | "octopus", 2026-03-27 01:16:41.804339 | orchestrator | "pacific", 2026-03-27 01:16:41.804366 | orchestrator | "elector-pinging", 2026-03-27 01:16:41.804390 | orchestrator | "quincy", 2026-03-27 01:16:41.804403 | orchestrator | "reef" 2026-03-27 01:16:41.804416 | orchestrator | ], 2026-03-27 01:16:41.804429 | orchestrator | "optional": [] 2026-03-27 01:16:41.804443 | orchestrator | }, 2026-03-27 01:16:41.804456 | orchestrator | "mons": [ 2026-03-27 01:16:41.804469 | orchestrator | { 2026-03-27 01:16:41.804481 | orchestrator | "rank": 0, 2026-03-27 01:16:41.804495 | orchestrator | "name": "testbed-node-0", 2026-03-27 01:16:41.804508 | orchestrator | "public_addrs": { 2026-03-27 01:16:41.804522 | orchestrator | "addrvec": [ 2026-03-27 01:16:41.804535 | orchestrator | { 2026-03-27 01:16:41.804549 | orchestrator | "type": "v2", 2026-03-27 01:16:41.804565 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-27 01:16:41.804580 | orchestrator | "nonce": 0 2026-03-27 01:16:41.804596 | orchestrator | }, 2026-03-27 01:16:41.804612 | orchestrator | { 2026-03-27 01:16:41.804628 | orchestrator | "type": "v1", 2026-03-27 01:16:41.804641 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-27 01:16:41.804675 | orchestrator | "nonce": 0 2026-03-27 01:16:41.804689 | orchestrator | } 2026-03-27 01:16:41.804703 | orchestrator | ] 2026-03-27 01:16:41.804716 | orchestrator | }, 2026-03-27 01:16:41.804730 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-27 01:16:41.804743 | orchestrator | "public_addr": 
"192.168.16.10:6789/0",
2026-03-27 01:16:41.804757 | orchestrator | "priority": 0,
2026-03-27 01:16:41.804771 | orchestrator | "weight": 0,
2026-03-27 01:16:41.804785 | orchestrator | "crush_location": "{}"
2026-03-27 01:16:41.804799 | orchestrator | },
2026-03-27 01:16:41.804813 | orchestrator | {
2026-03-27 01:16:41.804826 | orchestrator | "rank": 1,
2026-03-27 01:16:41.804839 | orchestrator | "name": "testbed-node-1",
2026-03-27 01:16:41.804852 | orchestrator | "public_addrs": {
2026-03-27 01:16:41.804863 | orchestrator | "addrvec": [
2026-03-27 01:16:41.804875 | orchestrator | {
2026-03-27 01:16:41.804886 | orchestrator | "type": "v2",
2026-03-27 01:16:41.804898 | orchestrator | "addr": "192.168.16.11:3300",
2026-03-27 01:16:41.804910 | orchestrator | "nonce": 0
2026-03-27 01:16:41.804923 | orchestrator | },
2026-03-27 01:16:41.804937 | orchestrator | {
2026-03-27 01:16:41.804950 | orchestrator | "type": "v1",
2026-03-27 01:16:41.804964 | orchestrator | "addr": "192.168.16.11:6789",
2026-03-27 01:16:41.804977 | orchestrator | "nonce": 0
2026-03-27 01:16:41.804991 | orchestrator | }
2026-03-27 01:16:41.805004 | orchestrator | ]
2026-03-27 01:16:41.805018 | orchestrator | },
2026-03-27 01:16:41.805050 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-03-27 01:16:41.805065 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-03-27 01:16:41.805078 | orchestrator | "priority": 0,
2026-03-27 01:16:41.805091 | orchestrator | "weight": 0,
2026-03-27 01:16:41.805105 | orchestrator | "crush_location": "{}"
2026-03-27 01:16:41.805118 | orchestrator | },
2026-03-27 01:16:41.805132 | orchestrator | {
2026-03-27 01:16:41.805145 | orchestrator | "rank": 2,
2026-03-27 01:16:41.805159 | orchestrator | "name": "testbed-node-2",
2026-03-27 01:16:41.805173 | orchestrator | "public_addrs": {
2026-03-27 01:16:41.805187 | orchestrator | "addrvec": [
2026-03-27 01:16:41.805200 | orchestrator | {
2026-03-27 01:16:41.805214 | orchestrator | "type": "v2",
2026-03-27 01:16:41.805228 | orchestrator | "addr": "192.168.16.12:3300",
2026-03-27 01:16:41.805241 | orchestrator | "nonce": 0
2026-03-27 01:16:41.805254 | orchestrator | },
2026-03-27 01:16:41.805268 | orchestrator | {
2026-03-27 01:16:41.805282 | orchestrator | "type": "v1",
2026-03-27 01:16:41.805295 | orchestrator | "addr": "192.168.16.12:6789",
2026-03-27 01:16:41.805308 | orchestrator | "nonce": 0
2026-03-27 01:16:41.805321 | orchestrator | }
2026-03-27 01:16:41.805334 | orchestrator | ]
2026-03-27 01:16:41.805347 | orchestrator | },
2026-03-27 01:16:41.805360 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-03-27 01:16:41.805374 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-03-27 01:16:41.805387 | orchestrator | "priority": 0,
2026-03-27 01:16:41.805400 | orchestrator | "weight": 0,
2026-03-27 01:16:41.805413 | orchestrator | "crush_location": "{}"
2026-03-27 01:16:41.805439 | orchestrator | }
2026-03-27 01:16:41.805454 | orchestrator | ]
2026-03-27 01:16:41.805468 | orchestrator | }
2026-03-27 01:16:41.805482 | orchestrator | }
2026-03-27 01:16:41.805667 | orchestrator |
2026-03-27 01:16:41.805689 | orchestrator | # Ceph free space status
2026-03-27 01:16:41.805704 | orchestrator |
2026-03-27 01:16:41.805719 | orchestrator | + echo
2026-03-27 01:16:41.805734 | orchestrator | + echo '# Ceph free space status'
2026-03-27 01:16:41.805749 | orchestrator | + echo
2026-03-27 01:16:41.805764 | orchestrator | + ceph df
2026-03-27 01:16:42.378060 | orchestrator | --- RAW STORAGE ---
2026-03-27 01:16:42.378145 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-03-27 01:16:42.378170 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2026-03-27 01:16:42.378180 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2026-03-27 01:16:42.378203 | orchestrator |
2026-03-27 01:16:42.378214 | orchestrator | --- POOLS ---
2026-03-27 01:16:42.378225 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-03-27 01:16:42.378236 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2026-03-27 01:16:42.378246 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-03-27 01:16:42.378256 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-03-27 01:16:42.378266 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-03-27 01:16:42.378275 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-03-27 01:16:42.378285 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-03-27 01:16:42.378296 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-03-27 01:16:42.378314 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-03-27 01:16:42.378329 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2026-03-27 01:16:42.378346 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-03-27 01:16:42.378361 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-03-27 01:16:42.378376 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB
2026-03-27 01:16:42.378390 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-03-27 01:16:42.378404 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-03-27 01:16:42.419214 | orchestrator | ++ semver latest 5.0.0
2026-03-27 01:16:42.462185 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-27 01:16:42.462238 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-27 01:16:42.462245 | orchestrator | + osism apply facts
2026-03-27 01:16:53.824449 | orchestrator | 2026-03-27 01:16:53 | INFO  | Prepare task for execution of facts.
2026-03-27 01:16:53.895056 | orchestrator | 2026-03-27 01:16:53 | INFO  | Task 8a1ecd91-54ca-4c9f-9fb8-dc512680d4e5 (facts) was prepared for execution.
2026-03-27 01:16:53.895123 | orchestrator | 2026-03-27 01:16:53 | INFO  | It takes a moment until task 8a1ecd91-54ca-4c9f-9fb8-dc512680d4e5 (facts) has been started and output is visible here.
2026-03-27 01:17:06.613370 | orchestrator |
2026-03-27 01:17:06.613439 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-27 01:17:06.613446 | orchestrator |
2026-03-27 01:17:06.613450 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-27 01:17:06.613455 | orchestrator | Friday 27 March 2026 01:16:56 +0000 (0:00:00.304) 0:00:00.304 **********
2026-03-27 01:17:06.613459 | orchestrator | ok: [testbed-manager]
2026-03-27 01:17:06.613463 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:06.613467 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:17:06.613471 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:17:06.613475 | orchestrator | ok: [testbed-node-4]
2026-03-27 01:17:06.613479 | orchestrator | ok: [testbed-node-3]
2026-03-27 01:17:06.613483 | orchestrator | ok: [testbed-node-5]
2026-03-27 01:17:06.613486 | orchestrator |
2026-03-27 01:17:06.613490 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-27 01:17:06.613512 | orchestrator | Friday 27 March 2026 01:16:58 +0000 (0:00:01.431) 0:00:01.736 **********
2026-03-27 01:17:06.613524 | orchestrator | skipping: [testbed-manager]
2026-03-27 01:17:06.613529 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:06.613533 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:17:06.613537 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:17:06.613541 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:17:06.613546 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:17:06.613552 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:17:06.613560 | orchestrator |
2026-03-27 01:17:06.613570 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-27 01:17:06.613577 | orchestrator |
2026-03-27 01:17:06.613583 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-27 01:17:06.613590 | orchestrator | Friday 27 March 2026 01:16:59 +0000 (0:00:01.164) 0:00:02.901 **********
2026-03-27 01:17:06.613596 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:17:06.613601 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:06.613607 | orchestrator | ok: [testbed-manager]
2026-03-27 01:17:06.613647 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:17:06.613654 | orchestrator | ok: [testbed-node-3]
2026-03-27 01:17:06.613660 | orchestrator | ok: [testbed-node-4]
2026-03-27 01:17:06.613666 | orchestrator | ok: [testbed-node-5]
2026-03-27 01:17:06.613672 | orchestrator |
2026-03-27 01:17:06.613679 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-27 01:17:06.613685 | orchestrator |
2026-03-27 01:17:06.613692 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-27 01:17:06.613698 | orchestrator | Friday 27 March 2026 01:17:05 +0000 (0:00:06.207) 0:00:09.108 **********
2026-03-27 01:17:06.613704 | orchestrator | skipping: [testbed-manager]
2026-03-27 01:17:06.613711 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:06.613717 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:17:06.613723 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:17:06.613730 | orchestrator | skipping: [testbed-node-3]
2026-03-27 01:17:06.613739 | orchestrator | skipping: [testbed-node-4]
2026-03-27 01:17:06.613745 | orchestrator | skipping: [testbed-node-5]
2026-03-27 01:17:06.613752 | orchestrator |
2026-03-27 01:17:06.613759 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:17:06.613765 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 01:17:06.613773 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 01:17:06.613779 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 01:17:06.613786 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 01:17:06.613792 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 01:17:06.613799 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 01:17:06.613805 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 01:17:06.613812 | orchestrator |
2026-03-27 01:17:06.613819 | orchestrator |
2026-03-27 01:17:06.613826 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:17:06.613833 | orchestrator | Friday 27 March 2026 01:17:06 +0000 (0:00:00.638) 0:00:09.747 **********
2026-03-27 01:17:06.613839 | orchestrator | ===============================================================================
2026-03-27 01:17:06.613846 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.21s
2026-03-27 01:17:06.613861 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.43s
2026-03-27 01:17:06.613868 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.16s
2026-03-27 01:17:06.613876 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s
2026-03-27 01:17:06.740694 | orchestrator | + osism validate ceph-mons
2026-03-27 01:17:36.031744 | orchestrator |
2026-03-27 01:17:36.031794 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-03-27 01:17:36.031800 | orchestrator |
2026-03-27 01:17:36.031804 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-27 01:17:36.031807 | orchestrator | Friday 27 March 2026 01:17:21 +0000 (0:00:00.469) 0:00:00.469 **********
2026-03-27 01:17:36.031811 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-27 01:17:36.031814 | orchestrator |
2026-03-27 01:17:36.031817 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-27 01:17:36.031820 | orchestrator | Friday 27 March 2026 01:17:22 +0000 (0:00:00.906) 0:00:01.375 **********
2026-03-27 01:17:36.031823 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-27 01:17:36.031826 | orchestrator |
2026-03-27 01:17:36.031829 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-27 01:17:36.031833 | orchestrator | Friday 27 March 2026 01:17:22 +0000 (0:00:00.633) 0:00:02.009 **********
2026-03-27 01:17:36.031836 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.031839 | orchestrator |
2026-03-27 01:17:36.031842 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-27 01:17:36.031845 | orchestrator | Friday 27 March 2026 01:17:22 +0000 (0:00:00.097) 0:00:02.107 **********
2026-03-27 01:17:36.031848 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.031851 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:17:36.031854 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:17:36.031857 | orchestrator |
2026-03-27 01:17:36.031860 | orchestrator | TASK [Get container info] ******************************************************
2026-03-27 01:17:36.031863 | orchestrator | Friday 27 March 2026 01:17:23 +0000 (0:00:00.247) 0:00:02.354 **********
2026-03-27 01:17:36.031866 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:17:36.031870 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:17:36.031873 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.031876 | orchestrator |
2026-03-27 01:17:36.031879 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-27 01:17:36.031883 | orchestrator | Friday 27 March 2026 01:17:24 +0000 (0:00:01.594) 0:00:03.949 **********
2026-03-27 01:17:36.031889 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.031895 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:17:36.031903 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:17:36.031908 | orchestrator |
2026-03-27 01:17:36.031915 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-27 01:17:36.031920 | orchestrator | Friday 27 March 2026 01:17:24 +0000 (0:00:00.266) 0:00:04.215 **********
2026-03-27 01:17:36.031925 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.031930 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:17:36.031935 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:17:36.031941 | orchestrator |
2026-03-27 01:17:36.031947 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-27 01:17:36.031952 | orchestrator | Friday 27 March 2026 01:17:25 +0000 (0:00:00.276) 0:00:04.486 **********
2026-03-27 01:17:36.031958 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.031963 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:17:36.031969 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:17:36.031974 | orchestrator |
2026-03-27 01:17:36.031980 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-03-27 01:17:36.031986 | orchestrator | Friday 27 March 2026 01:17:25 +0000 (0:00:00.360) 0:00:04.763 **********
2026-03-27 01:17:36.031991 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032007 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:17:36.032013 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:17:36.032021 | orchestrator |
2026-03-27 01:17:36.032026 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-03-27 01:17:36.032032 | orchestrator | Friday 27 March 2026 01:17:25 +0000 (0:00:00.291) 0:00:05.123 **********
2026-03-27 01:17:36.032037 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.032043 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:17:36.032050 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:17:36.032056 | orchestrator |
2026-03-27 01:17:36.032066 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-27 01:17:36.032071 | orchestrator | Friday 27 March 2026 01:17:26 +0000 (0:00:00.231) 0:00:05.414 **********
2026-03-27 01:17:36.032075 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032083 | orchestrator |
2026-03-27 01:17:36.032089 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-27 01:17:36.032094 | orchestrator | Friday 27 March 2026 01:17:26 +0000 (0:00:00.223) 0:00:05.646 **********
2026-03-27 01:17:36.032099 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032104 | orchestrator |
2026-03-27 01:17:36.032109 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-27 01:17:36.032113 | orchestrator | Friday 27 March 2026 01:17:26 +0000 (0:00:00.241) 0:00:05.869 **********
2026-03-27 01:17:36.032119 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032124 | orchestrator |
2026-03-27 01:17:36.032129 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-27 01:17:36.032134 | orchestrator | Friday 27 March 2026 01:17:26 +0000 (0:00:00.065) 0:00:06.111 **********
2026-03-27 01:17:36.032138 | orchestrator |
2026-03-27 01:17:36.032143 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-27 01:17:36.032148 | orchestrator | Friday 27 March 2026 01:17:26 +0000 (0:00:00.065) 0:00:06.176 **********
2026-03-27 01:17:36.032154 | orchestrator |
2026-03-27 01:17:36.032159 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-27 01:17:36.032163 | orchestrator | Friday 27 March 2026 01:17:26 +0000 (0:00:00.064) 0:00:06.241 **********
2026-03-27 01:17:36.032168 | orchestrator |
2026-03-27 01:17:36.032172 | orchestrator | TASK [Print report file information] *******************************************
2026-03-27 01:17:36.032177 | orchestrator | Friday 27 March 2026 01:17:27 +0000 (0:00:00.174) 0:00:06.416 **********
2026-03-27 01:17:36.032181 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032186 | orchestrator |
2026-03-27 01:17:36.032191 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-27 01:17:36.032197 | orchestrator | Friday 27 March 2026 01:17:27 +0000 (0:00:00.228) 0:00:06.644 **********
2026-03-27 01:17:36.032202 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032207 | orchestrator |
2026-03-27 01:17:36.032222 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-03-27 01:17:36.032226 | orchestrator | Friday 27 March 2026 01:17:27 +0000 (0:00:00.237) 0:00:06.882 **********
2026-03-27 01:17:36.032229 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.032232 | orchestrator |
2026-03-27 01:17:36.032236 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-03-27 01:17:36.032239 | orchestrator | Friday 27 March 2026 01:17:27 +0000 (0:00:00.104) 0:00:06.986 **********
2026-03-27 01:17:36.032242 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:17:36.032245 | orchestrator |
2026-03-27 01:17:36.032248 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-03-27 01:17:36.032251 | orchestrator | Friday 27 March 2026 01:17:29 +0000 (0:00:01.584) 0:00:08.570 **********
2026-03-27 01:17:36.032254 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.032257 | orchestrator |
2026-03-27 01:17:36.032260 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-03-27 01:17:36.032263 | orchestrator | Friday 27 March 2026 01:17:29 +0000 (0:00:00.309) 0:00:08.880 **********
2026-03-27 01:17:36.032271 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032274 | orchestrator |
2026-03-27 01:17:36.032277 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-03-27 01:17:36.032281 | orchestrator | Friday 27 March 2026 01:17:29 +0000 (0:00:00.121) 0:00:09.001 **********
2026-03-27 01:17:36.032284 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.032287 | orchestrator |
2026-03-27 01:17:36.032290 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-03-27 01:17:36.032293 | orchestrator | Friday 27 March 2026 01:17:30 +0000 (0:00:00.315) 0:00:09.317 **********
2026-03-27 01:17:36.032299 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.032302 | orchestrator |
2026-03-27 01:17:36.032305 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-03-27 01:17:36.032308 | orchestrator | Friday 27 March 2026 01:17:30 +0000 (0:00:00.286) 0:00:09.604 **********
2026-03-27 01:17:36.032311 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032314 | orchestrator |
2026-03-27 01:17:36.032317 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-03-27 01:17:36.032320 | orchestrator | Friday 27 March 2026 01:17:30 +0000 (0:00:00.139) 0:00:09.743 **********
2026-03-27 01:17:36.032323 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.032326 | orchestrator |
2026-03-27 01:17:36.032330 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-03-27 01:17:36.032335 | orchestrator | Friday 27 March 2026 01:17:30 +0000 (0:00:00.135) 0:00:09.879 **********
2026-03-27 01:17:36.032340 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.032343 | orchestrator |
2026-03-27 01:17:36.032349 | orchestrator | TASK [Gather status data] ******************************************************
2026-03-27 01:17:36.032353 | orchestrator | Friday 27 March 2026 01:17:30 +0000 (0:00:00.256) 0:00:10.135 **********
2026-03-27 01:17:36.032361 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:17:36.032367 | orchestrator |
2026-03-27 01:17:36.032372 | orchestrator | TASK [Set health test data] ****************************************************
2026-03-27 01:17:36.032377 | orchestrator | Friday 27 March 2026 01:17:32 +0000 (0:00:01.333) 0:00:11.469 **********
2026-03-27 01:17:36.032383 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.032389 | orchestrator |
2026-03-27 01:17:36.032394 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-03-27 01:17:36.032398 | orchestrator | Friday 27 March 2026 01:17:32 +0000 (0:00:00.294) 0:00:11.763 **********
2026-03-27 01:17:36.032403 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032408 | orchestrator |
2026-03-27 01:17:36.032413 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-03-27 01:17:36.032418 | orchestrator | Friday 27 March 2026 01:17:32 +0000 (0:00:00.137) 0:00:11.901 **********
2026-03-27 01:17:36.032423 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:17:36.032429 | orchestrator |
2026-03-27 01:17:36.032433 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-03-27 01:17:36.032438 | orchestrator | Friday 27 March 2026 01:17:32 +0000 (0:00:00.147) 0:00:12.048 **********
2026-03-27 01:17:36.032443 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032447 | orchestrator |
2026-03-27 01:17:36.032451 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-03-27 01:17:36.032456 | orchestrator | Friday 27 March 2026 01:17:32 +0000 (0:00:00.132) 0:00:12.181 **********
2026-03-27 01:17:36.032461 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032466 | orchestrator |
2026-03-27 01:17:36.032471 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-27 01:17:36.032476 | orchestrator | Friday 27 March 2026 01:17:33 +0000 (0:00:00.140) 0:00:12.322 **********
2026-03-27 01:17:36.032481 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-27 01:17:36.032487 | orchestrator |
2026-03-27 01:17:36.032492 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-27 01:17:36.032497 | orchestrator | Friday 27 March 2026 01:17:33 +0000 (0:00:00.255) 0:00:12.577 **********
2026-03-27 01:17:36.032508 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:17:36.032513 | orchestrator |
2026-03-27 01:17:36.032520 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-27 01:17:36.032526 | orchestrator | Friday 27 March 2026 01:17:33 +0000 (0:00:00.223) 0:00:12.800 **********
2026-03-27 01:17:36.032531 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-27 01:17:36.032536 | orchestrator |
2026-03-27 01:17:36.032541 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-27 01:17:36.032547 | orchestrator | Friday 27 March 2026 01:17:35 +0000 (0:00:01.645) 0:00:14.446 **********
2026-03-27 01:17:36.032552 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-27 01:17:36.032558 | orchestrator |
2026-03-27 01:17:36.032563 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-27 01:17:36.032603 | orchestrator | Friday 27 March 2026 01:17:35 +0000 (0:00:00.275) 0:00:14.722 **********
2026-03-27 01:17:36.032608 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-27 01:17:36.032614 | orchestrator |
2026-03-27 01:17:36.032626 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-27 01:17:38.210626 | orchestrator | Friday 27 March 2026 01:17:36 +0000 (0:00:00.617) 0:00:15.339 **********
2026-03-27 01:17:38.210680 | orchestrator |
2026-03-27 01:17:38.210688 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-27 01:17:38.210694 | orchestrator | Friday 27 March 2026 01:17:36 +0000 (0:00:00.071) 0:00:15.411 **********
2026-03-27 01:17:38.210700 | orchestrator |
2026-03-27 01:17:38.210707 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-27 01:17:38.210713 | orchestrator | Friday 27 March 2026 01:17:36 +0000 (0:00:00.068) 0:00:15.480 **********
2026-03-27 01:17:38.210720 | orchestrator |
2026-03-27 01:17:38.210725 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-27 01:17:38.210730 | orchestrator | Friday 27 March 2026 01:17:36 +0000 (0:00:00.077) 0:00:15.557 **********
2026-03-27 01:17:38.210736 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-27 01:17:38.210742 | orchestrator |
2026-03-27 01:17:38.210747 | orchestrator | TASK [Print report file information] *******************************************
2026-03-27 01:17:38.210753 | orchestrator | Friday 27 March 2026 01:17:37 +0000 (0:00:01.236) 0:00:16.793 **********
2026-03-27 01:17:38.210758 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-27 01:17:38.210764 | orchestrator |  "msg": [
2026-03-27 01:17:38.210770 | orchestrator |  "Validator run completed.",
2026-03-27 01:17:38.210776 | orchestrator |  "You can find the report file here:",
2026-03-27 01:17:38.210782 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-27T01:17:21+00:00-report.json",
2026-03-27 01:17:38.210788 | orchestrator |  "on the following host:",
2026-03-27 01:17:38.210794 | orchestrator |  "testbed-manager"
2026-03-27 01:17:38.210800 | orchestrator |  ]
2026-03-27 01:17:38.210806 | orchestrator | }
2026-03-27 01:17:38.210812 | orchestrator |
2026-03-27 01:17:38.210817 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:17:38.210823 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-27 01:17:38.210830 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 01:17:38.210836 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-27 01:17:38.210842 | orchestrator |
2026-03-27 01:17:38.210847 | orchestrator |
2026-03-27 01:17:38.210853 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:17:38.210859 | orchestrator | Friday 27 March 2026 01:17:37 +0000 (0:00:00.396) 0:00:17.190 **********
2026-03-27 01:17:38.210879 | orchestrator | ===============================================================================
2026-03-27 01:17:38.210883 | orchestrator | Aggregate test results step one ----------------------------------------- 1.65s
2026-03-27 01:17:38.210890 | orchestrator | Get container info ------------------------------------------------------ 1.59s
2026-03-27 01:17:38.210893 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.58s
2026-03-27 01:17:38.210896 | orchestrator | Gather status data ------------------------------------------------------ 1.33s
2026-03-27 01:17:38.210899 | orchestrator | Write report file ------------------------------------------------------- 1.24s
2026-03-27 01:17:38.210902 | orchestrator | Get timestamp for report file ------------------------------------------- 0.91s
2026-03-27 01:17:38.210905 | orchestrator | Create report output directory ------------------------------------------ 0.63s
2026-03-27 01:17:38.210908 | orchestrator | Aggregate test results step three --------------------------------------- 0.62s
2026-03-27 01:17:38.210911 | orchestrator | Print report file information ------------------------------------------- 0.40s
2026-03-27 01:17:38.210914 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.36s
2026-03-27 01:17:38.210918 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s
2026-03-27 01:17:38.210921 | orchestrator | Set quorum test data ---------------------------------------------------- 0.31s
2026-03-27 01:17:38.210924 | orchestrator | Flush handlers ---------------------------------------------------------- 0.30s
2026-03-27 01:17:38.210927 | orchestrator | Set health test data ---------------------------------------------------- 0.29s
2026-03-27 01:17:38.210930 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.29s
2026-03-27 01:17:38.210933 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s
2026-03-27 01:17:38.210936 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s
2026-03-27 01:17:38.210939 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s
2026-03-27 01:17:38.210942 | orchestrator | Set test result to passed if container is existing ---------------------- 0.27s
2026-03-27 01:17:38.210945 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s
2026-03-27 01:17:38.392778 | orchestrator | + osism validate ceph-mgrs
2026-03-27 01:18:07.308002 | orchestrator |
2026-03-27 01:18:07.308087 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-03-27 01:18:07.308097 | orchestrator |
2026-03-27 01:18:07.308104 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-27 01:18:07.308111 | orchestrator | Friday 27 March 2026 01:17:53 +0000 (0:00:00.527) 0:00:00.527 **********
2026-03-27 01:18:07.308119 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-27 01:18:07.308125 | orchestrator |
2026-03-27 01:18:07.308129 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-27 01:18:07.308134 | orchestrator | Friday 27 March 2026 01:17:54 +0000 (0:00:00.999) 0:00:01.527 **********
2026-03-27 01:18:07.308138 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-27 01:18:07.308142 | orchestrator |
2026-03-27 01:18:07.308146 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-27 01:18:07.308150 | orchestrator | Friday 27 March 2026 01:17:55 +0000 (0:00:00.721) 0:00:02.248 **********
2026-03-27 01:18:07.308155 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:18:07.308160 | orchestrator |
2026-03-27 01:18:07.308164 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-27 01:18:07.308182 | orchestrator | Friday 27 March 2026 01:17:55 +0000 (0:00:00.117) 0:00:02.366 **********
2026-03-27 01:18:07.308186 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:18:07.308190 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:18:07.308194 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:18:07.308198 | orchestrator |
2026-03-27 01:18:07.308202 | orchestrator | TASK [Get container info] ******************************************************
2026-03-27 01:18:07.308205 | orchestrator | Friday 27 March 2026 01:17:55 +0000 (0:00:00.281) 0:00:02.648 **********
2026-03-27 01:18:07.308224 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:18:07.308228 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:18:07.308232 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:18:07.308236 | orchestrator |
2026-03-27 01:18:07.308239 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-27 01:18:07.308243 | orchestrator | Friday 27 March 2026 01:17:56 +0000 (0:00:01.361) 0:00:04.009 **********
2026-03-27 01:18:07.308247 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:18:07.308251 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:18:07.308255 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:18:07.308258 | orchestrator |
2026-03-27 01:18:07.308266 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-27 01:18:07.308273 | orchestrator | Friday 27 March 2026 01:17:57 +0000 (0:00:00.316) 0:00:04.326 **********
2026-03-27 01:18:07.308278 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:18:07.308283 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:18:07.308289 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:18:07.308294 | orchestrator |
2026-03-27 01:18:07.308299 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-27 01:18:07.308305 | orchestrator | Friday 27 March 2026 01:17:57 +0000 (0:00:00.306) 0:00:04.632 **********
2026-03-27 01:18:07.308310 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:18:07.308316 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:18:07.308321 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:18:07.308327 | orchestrator |
2026-03-27 01:18:07.308332 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-03-27 01:18:07.308339 | orchestrator | Friday 27 March 2026 01:17:57 +0000 (0:00:00.337) 0:00:04.969 **********
2026-03-27 01:18:07.308345 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:18:07.308352 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:18:07.308358 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:18:07.308364 | orchestrator |
2026-03-27 01:18:07.308369 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-03-27 01:18:07.308375 | orchestrator | Friday 27 March 2026 01:17:58 +0000 (0:00:00.480) 0:00:05.449 **********
2026-03-27 01:18:07.308381 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:18:07.308387 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:18:07.308393 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:18:07.308399 | orchestrator |
2026-03-27 01:18:07.308405 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-27 01:18:07.308411 | orchestrator | Friday 27 March 2026 01:17:58 +0000 (0:00:00.300) 0:00:05.750 **********
2026-03-27 01:18:07.308417 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:18:07.308424 | orchestrator |
2026-03-27 01:18:07.308430 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-27 01:18:07.308436 | orchestrator | Friday 27 March 2026 01:17:58 +0000 (0:00:00.243) 0:00:05.993 **********
2026-03-27 01:18:07.308443 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:18:07.308450 | orchestrator |
2026-03-27 01:18:07.308454 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-27 01:18:07.308458 | orchestrator | Friday 27 March 2026 01:17:59 +0000 (0:00:00.251) 0:00:06.245 **********
2026-03-27 01:18:07.308462 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:18:07.308466 | orchestrator |
2026-03-27 01:18:07.308469 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-27 01:18:07.308473 | orchestrator | Friday 27 March 2026 01:17:59 +0000 (0:00:00.067) 0:00:06.489 **********
2026-03-27 01:18:07.308477 | orchestrator |
2026-03-27 01:18:07.308480 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-27 01:18:07.308484 | orchestrator | Friday 27 March 2026 01:17:59 +0000 (0:00:00.070) 0:00:06.557 **********
2026-03-27 01:18:07.308488 | orchestrator |
2026-03-27 01:18:07.308492 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-27 01:18:07.308495 | orchestrator | Friday 27 March 2026 01:17:59 +0000 (0:00:00.070) 0:00:06.628 **********
2026-03-27 01:18:07.308506 | orchestrator |
2026-03-27 01:18:07.308510 | orchestrator | TASK [Print report file information] *******************************************
2026-03-27 01:18:07.308514 | orchestrator | Friday 27 March 2026 01:17:59 +0000 (0:00:00.212) 0:00:06.840 **********
2026-03-27 01:18:07.308560 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:18:07.308564 | orchestrator |
2026-03-27 01:18:07.308568 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-27 01:18:07.308572 | orchestrator | Friday 27 March 2026 01:18:00 +0000 (0:00:00.288) 0:00:07.129 **********
2026-03-27 01:18:07.308576 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:18:07.308579 | orchestrator |
2026-03-27 01:18:07.308597 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-03-27 01:18:07.308602 | orchestrator | Friday 27 March 2026 01:18:00 +0000 (0:00:00.259) 0:00:07.389 **********
2026-03-27 01:18:07.308605 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:18:07.308609 | orchestrator |
2026-03-27 01:18:07.308613 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-03-27 01:18:07.308617 | orchestrator | Friday 27 March 2026 01:18:00 +0000 (0:00:00.122) 0:00:07.512 ********** 2026-03-27 01:18:07.308621 | orchestrator | changed: [testbed-node-0] 2026-03-27 01:18:07.308624 | orchestrator | 2026-03-27 01:18:07.308628 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-27 01:18:07.308632 | orchestrator | Friday 27 March 2026 01:18:02 +0000 (0:00:01.633) 0:00:09.145 ********** 2026-03-27 01:18:07.308636 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:18:07.308639 | orchestrator | 2026-03-27 01:18:07.308643 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-27 01:18:07.308647 | orchestrator | Friday 27 March 2026 01:18:02 +0000 (0:00:00.252) 0:00:09.398 ********** 2026-03-27 01:18:07.308651 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:18:07.308654 | orchestrator | 2026-03-27 01:18:07.308658 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-27 01:18:07.308662 | orchestrator | Friday 27 March 2026 01:18:02 +0000 (0:00:00.280) 0:00:09.678 ********** 2026-03-27 01:18:07.308666 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:18:07.308669 | orchestrator | 2026-03-27 01:18:07.308673 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-27 01:18:07.308677 | orchestrator | Friday 27 March 2026 01:18:02 +0000 (0:00:00.135) 0:00:09.813 ********** 2026-03-27 01:18:07.308680 | orchestrator | ok: [testbed-node-0] 2026-03-27 01:18:07.308684 | orchestrator | 2026-03-27 01:18:07.308688 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-27 01:18:07.308692 | orchestrator | Friday 27 March 2026 01:18:02 +0000 (0:00:00.144) 0:00:09.957 ********** 2026-03-27 01:18:07.308695 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-27 
01:18:07.308699 | orchestrator | 2026-03-27 01:18:07.308703 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-27 01:18:07.308706 | orchestrator | Friday 27 March 2026 01:18:03 +0000 (0:00:00.261) 0:00:10.219 ********** 2026-03-27 01:18:07.308715 | orchestrator | skipping: [testbed-node-0] 2026-03-27 01:18:07.308719 | orchestrator | 2026-03-27 01:18:07.308723 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-27 01:18:07.308727 | orchestrator | Friday 27 March 2026 01:18:03 +0000 (0:00:00.247) 0:00:10.466 ********** 2026-03-27 01:18:07.308730 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:07.308734 | orchestrator | 2026-03-27 01:18:07.308738 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-27 01:18:07.308741 | orchestrator | Friday 27 March 2026 01:18:04 +0000 (0:00:01.482) 0:00:11.949 ********** 2026-03-27 01:18:07.308745 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:07.308749 | orchestrator | 2026-03-27 01:18:07.308752 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-27 01:18:07.308756 | orchestrator | Friday 27 March 2026 01:18:05 +0000 (0:00:00.255) 0:00:12.204 ********** 2026-03-27 01:18:07.308764 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:07.308768 | orchestrator | 2026-03-27 01:18:07.308771 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-27 01:18:07.308775 | orchestrator | Friday 27 March 2026 01:18:05 +0000 (0:00:00.245) 0:00:12.450 ********** 2026-03-27 01:18:07.308779 | orchestrator | 2026-03-27 01:18:07.308782 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-27 01:18:07.308786 | orchestrator 
| Friday 27 March 2026 01:18:05 +0000 (0:00:00.072) 0:00:12.522 ********** 2026-03-27 01:18:07.308790 | orchestrator | 2026-03-27 01:18:07.308793 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-27 01:18:07.308797 | orchestrator | Friday 27 March 2026 01:18:05 +0000 (0:00:00.075) 0:00:12.598 ********** 2026-03-27 01:18:07.308801 | orchestrator | 2026-03-27 01:18:07.308805 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-27 01:18:07.308808 | orchestrator | Friday 27 March 2026 01:18:05 +0000 (0:00:00.088) 0:00:12.686 ********** 2026-03-27 01:18:07.308812 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:07.308816 | orchestrator | 2026-03-27 01:18:07.308819 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-27 01:18:07.308823 | orchestrator | Friday 27 March 2026 01:18:06 +0000 (0:00:01.230) 0:00:13.917 ********** 2026-03-27 01:18:07.308827 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-27 01:18:07.308831 | orchestrator |  "msg": [ 2026-03-27 01:18:07.308835 | orchestrator |  "Validator run completed.", 2026-03-27 01:18:07.308839 | orchestrator |  "You can find the report file here:", 2026-03-27 01:18:07.308843 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-27T01:17:54+00:00-report.json", 2026-03-27 01:18:07.308848 | orchestrator |  "on the following host:", 2026-03-27 01:18:07.308852 | orchestrator |  "testbed-manager" 2026-03-27 01:18:07.308856 | orchestrator |  ] 2026-03-27 01:18:07.308860 | orchestrator | } 2026-03-27 01:18:07.308864 | orchestrator | 2026-03-27 01:18:07.308868 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:18:07.308872 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-03-27 01:18:07.308876 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 01:18:07.308884 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-27 01:18:07.605032 | orchestrator | 2026-03-27 01:18:07.605105 | orchestrator | 2026-03-27 01:18:07.605112 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:18:07.605118 | orchestrator | Friday 27 March 2026 01:18:07 +0000 (0:00:00.427) 0:00:14.345 ********** 2026-03-27 01:18:07.605123 | orchestrator | =============================================================================== 2026-03-27 01:18:07.605127 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.63s 2026-03-27 01:18:07.605131 | orchestrator | Aggregate test results step one ----------------------------------------- 1.48s 2026-03-27 01:18:07.605135 | orchestrator | Get container info ------------------------------------------------------ 1.36s 2026-03-27 01:18:07.605139 | orchestrator | Write report file ------------------------------------------------------- 1.23s 2026-03-27 01:18:07.605143 | orchestrator | Get timestamp for report file ------------------------------------------- 1.00s 2026-03-27 01:18:07.605147 | orchestrator | Create report output directory ------------------------------------------ 0.72s 2026-03-27 01:18:07.605151 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.48s 2026-03-27 01:18:07.605154 | orchestrator | Print report file information ------------------------------------------- 0.43s 2026-03-27 01:18:07.605186 | orchestrator | Flush handlers ---------------------------------------------------------- 0.35s 2026-03-27 01:18:07.605194 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2026-03-27 01:18:07.605200 | 
orchestrator | Set test result to failed if container is missing ----------------------- 0.32s 2026-03-27 01:18:07.605207 | orchestrator | Set test result to passed if container is existing ---------------------- 0.31s 2026-03-27 01:18:07.605213 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s 2026-03-27 01:18:07.605220 | orchestrator | Print report file information ------------------------------------------- 0.29s 2026-03-27 01:18:07.605226 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-03-27 01:18:07.605232 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.28s 2026-03-27 01:18:07.605239 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s 2026-03-27 01:18:07.605245 | orchestrator | Fail due to missing containers ------------------------------------------ 0.26s 2026-03-27 01:18:07.605253 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2026-03-27 01:18:07.605259 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.25s 2026-03-27 01:18:07.787367 | orchestrator | + osism validate ceph-osds 2026-03-27 01:18:26.977998 | orchestrator | 2026-03-27 01:18:26.978097 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-27 01:18:26.978108 | orchestrator | 2026-03-27 01:18:26.978116 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-27 01:18:26.978122 | orchestrator | Friday 27 March 2026 01:18:22 +0000 (0:00:00.501) 0:00:00.501 ********** 2026-03-27 01:18:26.978129 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:26.978136 | orchestrator | 2026-03-27 01:18:26.978143 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-03-27 01:18:26.978149 | orchestrator | Friday 27 March 2026 01:18:23 +0000 (0:00:01.018) 0:00:01.520 ********** 2026-03-27 01:18:26.978155 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:26.978162 | orchestrator | 2026-03-27 01:18:26.978168 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-27 01:18:26.978175 | orchestrator | Friday 27 March 2026 01:18:24 +0000 (0:00:00.255) 0:00:01.776 ********** 2026-03-27 01:18:26.978182 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:26.978187 | orchestrator | 2026-03-27 01:18:26.978193 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-27 01:18:26.978199 | orchestrator | Friday 27 March 2026 01:18:24 +0000 (0:00:00.708) 0:00:02.484 ********** 2026-03-27 01:18:26.978204 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:26.978210 | orchestrator | 2026-03-27 01:18:26.978216 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-27 01:18:26.978221 | orchestrator | Friday 27 March 2026 01:18:25 +0000 (0:00:00.131) 0:00:02.616 ********** 2026-03-27 01:18:26.978226 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:26.978231 | orchestrator | 2026-03-27 01:18:26.978237 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-27 01:18:26.978242 | orchestrator | Friday 27 March 2026 01:18:25 +0000 (0:00:00.134) 0:00:02.750 ********** 2026-03-27 01:18:26.978248 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:26.978254 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:18:26.978259 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:18:26.978265 | orchestrator | 2026-03-27 01:18:26.978270 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-03-27 01:18:26.978276 | orchestrator | Friday 27 March 2026 01:18:25 +0000 (0:00:00.446) 0:00:03.196 ********** 2026-03-27 01:18:26.978282 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:26.978287 | orchestrator | 2026-03-27 01:18:26.978293 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-27 01:18:26.978313 | orchestrator | Friday 27 March 2026 01:18:25 +0000 (0:00:00.159) 0:00:03.355 ********** 2026-03-27 01:18:26.978319 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:26.978325 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:26.978331 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:26.978337 | orchestrator | 2026-03-27 01:18:26.978343 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-03-27 01:18:26.978349 | orchestrator | Friday 27 March 2026 01:18:26 +0000 (0:00:00.318) 0:00:03.674 ********** 2026-03-27 01:18:26.978354 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:26.978360 | orchestrator | 2026-03-27 01:18:26.978375 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-27 01:18:26.978382 | orchestrator | Friday 27 March 2026 01:18:26 +0000 (0:00:00.345) 0:00:04.019 ********** 2026-03-27 01:18:26.978388 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:26.978394 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:26.978398 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:26.978401 | orchestrator | 2026-03-27 01:18:26.978405 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-03-27 01:18:26.978408 | orchestrator | Friday 27 March 2026 01:18:26 +0000 (0:00:00.286) 0:00:04.306 ********** 2026-03-27 01:18:26.978413 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6a05f5d84cc5a5160e5279778f6b4f223b6d1af00c06b94eb9c9d645e5e12b63', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-27 01:18:26.978420 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8aa6f4aeeac3bd2c48cee606636a8912ade4b42804bb555eb1b6ac99decc93a1', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-27 01:18:26.978425 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ccd79ce856ab93a6a873c595da73791c8d0efebceccb9dd49c164833e92a1883', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-27 01:18:26.978431 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9b9d4e706b85567bbf180e604cf43e8c0e3d1d7a708101b752f79465c9010dfe', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-03-27 01:18:26.978441 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7748388736b8cfd8b1d17bf32522442f29071a24ed3885dc2d67f91613d4db07', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-03-27 01:18:26.978457 | orchestrator | skipping: [testbed-node-3] => (item={'id': '21cc4d56d429dbd0af86a85b64e30290b4ee946b317c3e95ec1707b1e6320c9f', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-03-27 01:18:26.978463 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'df11839afe35da3eb7891900b56f51b5af4851c324cb7c467f4686e61ccadddb', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-03-27 01:18:26.978468 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '726bebe62f834c55e8e0686484bcb67ebfc9cb58aaeabf4c1702e377fa299111', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-03-27 01:18:26.978471 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2eb46879b8c1bbd1af3bde2221806594802106c04fa27e3e00f18f6a40366827', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-27 01:18:26.978474 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cecbffc4c25a96312e2d060e60273a2aebf77eeaf7e85d4137660adc45abd0b7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-27 01:18:26.978481 | orchestrator | ok: [testbed-node-3] => (item={'id': '2abe31d7b403e7593b99b4a22a698b45b8e8e8580c949bd444b7b5087714912d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-27 01:18:26.978485 | orchestrator | ok: [testbed-node-3] => (item={'id': 'baf081773df121a9661aa826149acc1fa358cdf7e5cfcccdf4f175606ee87253', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-27 01:18:26.978544 | orchestrator | skipping: [testbed-node-3] => (item={'id': '612c2f47d9281fc761ed2b49f7027d0cedd0c1c8792df40a569298e8b08582b7', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2026-03-27 01:18:26.978547 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd764eb9972e929e995ffa2f4f8cb169d12f07a45ec9b4c76601ca93c2ab86c8e', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
28 minutes (healthy)'})  2026-03-27 01:18:26.978553 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd9648aabf7c0a3872e2b182e5686da35c1dd6e77558d1ce45e96777ebac69484', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-03-27 01:18:26.978559 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6032c92709e3438c58c5e5a9b9db2afe1edd970cb084dc79cf6b3afcdaf94616', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-27 01:18:26.978567 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a73d09bbccc1cfa1f4fd881f89583d093a3da6f938704555de687859147a48fa', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-27 01:18:26.978573 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a2f9c167bc6dd766e1a4d449ef7c035113e433a5b031915163a554bcd28c29da', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-27 01:18:26.978578 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aa31c00bf7d08a2d0e3964c9f866cdae0f4586e01fda1fc0f9a026388af73e00', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-27 01:18:26.978583 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'de233c76d3c74b83aed91b61a5d41bafcc5bd9050ddcb1ba9317a4cc90b7446f', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-27 01:18:26.978591 | orchestrator | skipping: [testbed-node-4] => (item={'id': '241a20e199453592bdc76321d5738da1c905a7f058972fdb897a5cbfeabe15aa', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-27 01:18:26.978602 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a03dc7cd9f1baa479089929da7985bd7bb1b466bc431bdfccc0b829bc53687ea', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-03-27 01:18:27.118664 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'af39d22b2d4e30780361a0aae2e573a93efd1798127cbca045000978648944df', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-03-27 01:18:27.118725 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a4ff77b07c2e853820a28ce36d9119a61fd56c392acac8dfdb83d29ba3bd053', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-03-27 01:18:27.118748 | orchestrator | skipping: [testbed-node-4] => (item={'id': '914364b976677758dcfe563ff564819f8cd64d951d0563785d8acc6fc5f3691f', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-03-27 01:18:27.118754 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0b9a6ccbd499750c381d59b380f03bb12500eeb0520d8f12c553ac9d69ef456f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-03-27 01:18:27.118759 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a21f9f45871c16d7084d4acdd0cf6ef2e4895ab6d149fcc9288ab39f7ab3fce1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-27 01:18:27.118765 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'ceb0ff52ed1f4394dffde9304f2bd7a3a00a8eeecee1096598e2c046225c6fcc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-27 01:18:27.118771 | orchestrator | ok: [testbed-node-4] => (item={'id': '763a67afba8514f30eae63ef6b68230054b970a809c77821aa1f9e0a50ad96a0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-27 01:18:27.118776 | orchestrator | ok: [testbed-node-4] => (item={'id': '2d45a7ac572db753c4fb23863ca85afc9f1ee8a25176cf7ee43aea80e110f0dd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-27 01:18:27.118782 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9234adad7662ebdeefbc7a34d8040f835b7abcd02c48f34beca03230c240073e', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2026-03-27 01:18:27.118788 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b6e05e5193af90e4244d79ab3a4a4faf2c88d674c7a5c23d553f54180118e8d2', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-03-27 01:18:27.118792 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7e631612fb865dc65adf3b988ac7731ea039c5ba381e15cf6a3a72dec09d4adb', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-03-27 01:18:27.118798 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'af873d15a0684ccced89e7486e8e2ffb97637a7884d075bd6eae82dc9e83ea8c', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-27 01:18:27.118803 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '7f45eef4c32fdda5f918e84534115a36ed83ef7d50eb10a4b756f3c6b1961641', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-27 01:18:27.118809 | orchestrator | skipping: [testbed-node-4] => (item={'id': '15311fefc18263be5311b777f2c6c10c38bdfde867ea6f4500c523328a22eb31', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-27 01:18:27.118814 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'faeecf5ceddc60a21a8daa75046425cbef3b0d28d6864bcd5cb6e5afda57624f', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-27 01:18:27.118830 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1cd77e7e2d09a5a77089ab56b161e6d19ae4e8879e6f4f5f9f29715f9d6bac43', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-27 01:18:27.118838 | orchestrator | skipping: [testbed-node-5] => (item={'id': '69182ba31b292f16325172630b8cb3b596cbb0f2d8861f581fce4ca75788011a', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-27 01:18:27.118843 | orchestrator | skipping: [testbed-node-5] => (item={'id': '02e639db032f3978cbf46679cb8958e4661c92479247c0f4d94b1d5617e298ac', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-03-27 01:18:27.118848 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eb3cb7a59f716320e6ec1fda240ea796e9c3040e06da6366040a74e20fe7a6c7', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 
minutes'})  2026-03-27 01:18:27.118853 | orchestrator | skipping: [testbed-node-5] => (item={'id': '94f0b9f64003c704bc6a79a4df02636875be63a8f6edba21da3ad714d57c8e84', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-03-27 01:18:27.118857 | orchestrator | skipping: [testbed-node-5] => (item={'id': '30d3bcfd07656c1be9b8e49691276eec148abb630025300d831cdf12750b91f9', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-03-27 01:18:27.118862 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'de38e760f9f7c8ff742dd4db92a805d9c72a74305f706615d570b76a5446cdc5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-03-27 01:18:27.118867 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f0d3a256b3aa5c9d90d187e93b7281078c1666a49c0d82f53c7adc5670390242', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-27 01:18:27.118872 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f04e6e0b11f2b3af51d571cdfb65349956c86b4dae586cee4c5936c63c9b5d2e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-27 01:18:27.118886 | orchestrator | ok: [testbed-node-5] => (item={'id': '6ede98aec12e69d25f9a9e973d4d40e491aadd8a0e398570bd60d9a5fe4269e3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-27 01:18:27.118893 | orchestrator | ok: [testbed-node-5] => (item={'id': '7974711e59e51ae39ab5d99b17ca0b66039d42698b8fd705f63b4ff8fe56b3df', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-27 01:18:27.118898 | orchestrator | skipping: [testbed-node-5] => (item={'id': '37e91e4dad2a999375933124c99de931e12c2498e3cea0e68bf5c3bf1f2f8ab3', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2026-03-27 01:18:27.118903 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a5c0a1cce0fa9d08810387e246ac4fa8dfb6e0d2b008562192a70a4ad336b0d1', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-03-27 01:18:27.118908 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b4508b3f06611d71faaea1d22969c70f06bc73d9acc0698490928a661a57fedb', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-03-27 01:18:27.118915 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2074a3f06ab86bc3a8b36cebeccd8a5a862c2e6b01bfe11294eb27d4e4510957', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-27 01:18:27.118927 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'adf5f7d15c103347cb2358639ecf9416b198794f844632e0778597df5a5237e3', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-27 01:18:27.118936 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd7bcfb9b748441b93d8543d81d11555e8369513813e2339522f1356ca8d21099', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-27 01:18:40.167075 | orchestrator | 2026-03-27 01:18:40.167176 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-03-27 01:18:40.167191 | orchestrator | Friday 27 March 2026 01:18:27 +0000 (0:00:00.645) 0:00:04.952 ********** 2026-03-27 01:18:40.167198 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167207 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:40.167214 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:40.167220 | orchestrator | 2026-03-27 01:18:40.167227 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-03-27 01:18:40.167231 | orchestrator | Friday 27 March 2026 01:18:27 +0000 (0:00:00.306) 0:00:05.258 ********** 2026-03-27 01:18:40.167235 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.167240 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:18:40.167244 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:18:40.167248 | orchestrator | 2026-03-27 01:18:40.167252 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-03-27 01:18:40.167256 | orchestrator | Friday 27 March 2026 01:18:27 +0000 (0:00:00.277) 0:00:05.536 ********** 2026-03-27 01:18:40.167260 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167264 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:40.167267 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:40.167271 | orchestrator | 2026-03-27 01:18:40.167275 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-27 01:18:40.167279 | orchestrator | Friday 27 March 2026 01:18:28 +0000 (0:00:00.300) 0:00:05.836 ********** 2026-03-27 01:18:40.167282 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167286 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:40.167290 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:40.167294 | orchestrator | 2026-03-27 01:18:40.167297 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-03-27 
01:18:40.167302 | orchestrator | Friday 27 March 2026 01:18:28 +0000 (0:00:00.419) 0:00:06.255 ********** 2026-03-27 01:18:40.167306 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-03-27 01:18:40.167311 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-03-27 01:18:40.167314 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.167318 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-03-27 01:18:40.167322 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-03-27 01:18:40.167326 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:18:40.167329 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-03-27 01:18:40.167333 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-03-27 01:18:40.167337 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:18:40.167341 | orchestrator | 2026-03-27 01:18:40.167344 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-03-27 01:18:40.167348 | orchestrator | Friday 27 March 2026 01:18:29 +0000 (0:00:00.323) 0:00:06.578 ********** 2026-03-27 01:18:40.167352 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167356 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:40.167376 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:40.167381 | orchestrator | 2026-03-27 01:18:40.167388 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-27 01:18:40.167394 | orchestrator | Friday 27 March 2026 01:18:29 +0000 (0:00:00.284) 0:00:06.862 ********** 2026-03-27 01:18:40.167399 | orchestrator | skipping: [testbed-node-3] 
2026-03-27 01:18:40.167405 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:18:40.167411 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:18:40.167416 | orchestrator | 2026-03-27 01:18:40.167422 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-27 01:18:40.167428 | orchestrator | Friday 27 March 2026 01:18:29 +0000 (0:00:00.286) 0:00:07.149 ********** 2026-03-27 01:18:40.167434 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.167440 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:18:40.167446 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:18:40.167453 | orchestrator | 2026-03-27 01:18:40.167459 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-03-27 01:18:40.167520 | orchestrator | Friday 27 March 2026 01:18:30 +0000 (0:00:00.472) 0:00:07.621 ********** 2026-03-27 01:18:40.167525 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167530 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:40.167534 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:40.167537 | orchestrator | 2026-03-27 01:18:40.167542 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-27 01:18:40.167546 | orchestrator | Friday 27 March 2026 01:18:30 +0000 (0:00:00.339) 0:00:07.960 ********** 2026-03-27 01:18:40.167549 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.167553 | orchestrator | 2026-03-27 01:18:40.167557 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-27 01:18:40.167561 | orchestrator | Friday 27 March 2026 01:18:30 +0000 (0:00:00.268) 0:00:08.228 ********** 2026-03-27 01:18:40.167576 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.167580 | orchestrator | 2026-03-27 01:18:40.167584 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-03-27 01:18:40.167588 | orchestrator | Friday 27 March 2026 01:18:30 +0000 (0:00:00.252) 0:00:08.481 ********** 2026-03-27 01:18:40.167592 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.167595 | orchestrator | 2026-03-27 01:18:40.167599 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-27 01:18:40.167603 | orchestrator | Friday 27 March 2026 01:18:31 +0000 (0:00:00.238) 0:00:08.719 ********** 2026-03-27 01:18:40.167607 | orchestrator | 2026-03-27 01:18:40.167611 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-27 01:18:40.167614 | orchestrator | Friday 27 March 2026 01:18:31 +0000 (0:00:00.071) 0:00:08.791 ********** 2026-03-27 01:18:40.167618 | orchestrator | 2026-03-27 01:18:40.167622 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-27 01:18:40.167640 | orchestrator | Friday 27 March 2026 01:18:31 +0000 (0:00:00.068) 0:00:08.859 ********** 2026-03-27 01:18:40.167644 | orchestrator | 2026-03-27 01:18:40.167648 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-27 01:18:40.167652 | orchestrator | Friday 27 March 2026 01:18:31 +0000 (0:00:00.067) 0:00:08.927 ********** 2026-03-27 01:18:40.167655 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.167659 | orchestrator | 2026-03-27 01:18:40.167663 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-03-27 01:18:40.167667 | orchestrator | Friday 27 March 2026 01:18:31 +0000 (0:00:00.614) 0:00:09.541 ********** 2026-03-27 01:18:40.167670 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.167674 | orchestrator | 2026-03-27 01:18:40.167678 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-27 01:18:40.167681 | 
orchestrator | Friday 27 March 2026 01:18:32 +0000 (0:00:00.244) 0:00:09.785 ********** 2026-03-27 01:18:40.167685 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167689 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:40.167698 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:40.167702 | orchestrator | 2026-03-27 01:18:40.167705 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-03-27 01:18:40.167720 | orchestrator | Friday 27 March 2026 01:18:32 +0000 (0:00:00.291) 0:00:10.077 ********** 2026-03-27 01:18:40.167724 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167728 | orchestrator | 2026-03-27 01:18:40.167737 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-27 01:18:40.167741 | orchestrator | Friday 27 March 2026 01:18:32 +0000 (0:00:00.229) 0:00:10.306 ********** 2026-03-27 01:18:40.167745 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-27 01:18:40.167749 | orchestrator | 2026-03-27 01:18:40.167753 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-27 01:18:40.167757 | orchestrator | Friday 27 March 2026 01:18:34 +0000 (0:00:02.236) 0:00:12.543 ********** 2026-03-27 01:18:40.167760 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167764 | orchestrator | 2026-03-27 01:18:40.167768 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-27 01:18:40.167772 | orchestrator | Friday 27 March 2026 01:18:35 +0000 (0:00:00.118) 0:00:12.662 ********** 2026-03-27 01:18:40.167775 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167779 | orchestrator | 2026-03-27 01:18:40.167783 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-27 01:18:40.167787 | orchestrator | Friday 27 March 2026 01:18:35 +0000 (0:00:00.304) 0:00:12.966 
********** 2026-03-27 01:18:40.167790 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.167794 | orchestrator | 2026-03-27 01:18:40.167798 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-27 01:18:40.167802 | orchestrator | Friday 27 March 2026 01:18:35 +0000 (0:00:00.118) 0:00:13.085 ********** 2026-03-27 01:18:40.167806 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167811 | orchestrator | 2026-03-27 01:18:40.167815 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-27 01:18:40.167819 | orchestrator | Friday 27 March 2026 01:18:35 +0000 (0:00:00.119) 0:00:13.204 ********** 2026-03-27 01:18:40.167824 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167828 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:40.167832 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:40.167836 | orchestrator | 2026-03-27 01:18:40.167841 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-27 01:18:40.167845 | orchestrator | Friday 27 March 2026 01:18:36 +0000 (0:00:00.460) 0:00:13.664 ********** 2026-03-27 01:18:40.167849 | orchestrator | changed: [testbed-node-3] 2026-03-27 01:18:40.167854 | orchestrator | changed: [testbed-node-4] 2026-03-27 01:18:40.167858 | orchestrator | changed: [testbed-node-5] 2026-03-27 01:18:40.167862 | orchestrator | 2026-03-27 01:18:40.167867 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-27 01:18:40.167871 | orchestrator | Friday 27 March 2026 01:18:37 +0000 (0:00:01.802) 0:00:15.467 ********** 2026-03-27 01:18:40.167875 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167879 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:40.167883 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:40.167886 | orchestrator | 2026-03-27 01:18:40.167890 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-03-27 01:18:40.167894 | orchestrator | Friday 27 March 2026 01:18:38 +0000 (0:00:00.327) 0:00:15.795 ********** 2026-03-27 01:18:40.167897 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167907 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:40.167911 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:40.167915 | orchestrator | 2026-03-27 01:18:40.167918 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-27 01:18:40.167922 | orchestrator | Friday 27 March 2026 01:18:38 +0000 (0:00:00.458) 0:00:16.253 ********** 2026-03-27 01:18:40.167926 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.167930 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:18:40.167944 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:18:40.167948 | orchestrator | 2026-03-27 01:18:40.167952 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-27 01:18:40.167955 | orchestrator | Friday 27 March 2026 01:18:39 +0000 (0:00:00.448) 0:00:16.702 ********** 2026-03-27 01:18:40.167963 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:40.167966 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:40.167971 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:40.167977 | orchestrator | 2026-03-27 01:18:40.167983 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-27 01:18:40.167989 | orchestrator | Friday 27 March 2026 01:18:39 +0000 (0:00:00.289) 0:00:16.992 ********** 2026-03-27 01:18:40.167994 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.168001 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:18:40.168007 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:18:40.168012 | orchestrator | 2026-03-27 01:18:40.168018 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-03-27 01:18:40.168023 | orchestrator | Friday 27 March 2026 01:18:39 +0000 (0:00:00.276) 0:00:17.269 ********** 2026-03-27 01:18:40.168029 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:40.168035 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:18:40.168040 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:18:40.168046 | orchestrator | 2026-03-27 01:18:40.168058 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-27 01:18:47.393984 | orchestrator | Friday 27 March 2026 01:18:40 +0000 (0:00:00.463) 0:00:17.732 ********** 2026-03-27 01:18:47.394106 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:47.394115 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:47.394122 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:47.394128 | orchestrator | 2026-03-27 01:18:47.394138 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-27 01:18:47.394149 | orchestrator | Friday 27 March 2026 01:18:40 +0000 (0:00:00.552) 0:00:18.285 ********** 2026-03-27 01:18:47.394156 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:47.394163 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:47.394170 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:47.394176 | orchestrator | 2026-03-27 01:18:47.394183 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-27 01:18:47.394190 | orchestrator | Friday 27 March 2026 01:18:41 +0000 (0:00:00.497) 0:00:18.782 ********** 2026-03-27 01:18:47.394197 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:47.394203 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:47.394210 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:47.394216 | orchestrator | 2026-03-27 01:18:47.394223 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-27 
01:18:47.394230 | orchestrator | Friday 27 March 2026 01:18:41 +0000 (0:00:00.304) 0:00:19.087 ********** 2026-03-27 01:18:47.394238 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:47.394245 | orchestrator | skipping: [testbed-node-4] 2026-03-27 01:18:47.394252 | orchestrator | skipping: [testbed-node-5] 2026-03-27 01:18:47.394258 | orchestrator | 2026-03-27 01:18:47.394265 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-03-27 01:18:47.394272 | orchestrator | Friday 27 March 2026 01:18:42 +0000 (0:00:00.495) 0:00:19.582 ********** 2026-03-27 01:18:47.394279 | orchestrator | ok: [testbed-node-3] 2026-03-27 01:18:47.394286 | orchestrator | ok: [testbed-node-4] 2026-03-27 01:18:47.394293 | orchestrator | ok: [testbed-node-5] 2026-03-27 01:18:47.394300 | orchestrator | 2026-03-27 01:18:47.394307 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-27 01:18:47.394315 | orchestrator | Friday 27 March 2026 01:18:42 +0000 (0:00:00.319) 0:00:19.901 ********** 2026-03-27 01:18:47.394322 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:47.394329 | orchestrator | 2026-03-27 01:18:47.394336 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-27 01:18:47.394365 | orchestrator | Friday 27 March 2026 01:18:42 +0000 (0:00:00.252) 0:00:20.154 ********** 2026-03-27 01:18:47.394372 | orchestrator | skipping: [testbed-node-3] 2026-03-27 01:18:47.394378 | orchestrator | 2026-03-27 01:18:47.394385 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-27 01:18:47.394392 | orchestrator | Friday 27 March 2026 01:18:42 +0000 (0:00:00.245) 0:00:20.399 ********** 2026-03-27 01:18:47.394399 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:47.394406 | orchestrator | 2026-03-27 01:18:47.394413 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-27 01:18:47.394421 | orchestrator | Friday 27 March 2026 01:18:44 +0000 (0:00:01.686) 0:00:22.085 ********** 2026-03-27 01:18:47.394428 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:47.394435 | orchestrator | 2026-03-27 01:18:47.394442 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-27 01:18:47.394449 | orchestrator | Friday 27 March 2026 01:18:44 +0000 (0:00:00.287) 0:00:22.373 ********** 2026-03-27 01:18:47.394505 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:47.394513 | orchestrator | 2026-03-27 01:18:47.394519 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-27 01:18:47.394526 | orchestrator | Friday 27 March 2026 01:18:45 +0000 (0:00:00.268) 0:00:22.641 ********** 2026-03-27 01:18:47.394532 | orchestrator | 2026-03-27 01:18:47.394538 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-27 01:18:47.394545 | orchestrator | Friday 27 March 2026 01:18:45 +0000 (0:00:00.067) 0:00:22.708 ********** 2026-03-27 01:18:47.394552 | orchestrator | 2026-03-27 01:18:47.394558 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-27 01:18:47.394562 | orchestrator | Friday 27 March 2026 01:18:45 +0000 (0:00:00.229) 0:00:22.937 ********** 2026-03-27 01:18:47.394567 | orchestrator | 2026-03-27 01:18:47.394572 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-27 01:18:47.394576 | orchestrator | Friday 27 March 2026 01:18:45 +0000 (0:00:00.073) 0:00:23.011 ********** 2026-03-27 01:18:47.394583 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-27 01:18:47.394589 | orchestrator | 
2026-03-27 01:18:47.394596 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-27 01:18:47.394602 | orchestrator | Friday 27 March 2026 01:18:46 +0000 (0:00:01.282) 0:00:24.293 ********** 2026-03-27 01:18:47.394609 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-27 01:18:47.394616 | orchestrator |  "msg": [ 2026-03-27 01:18:47.394623 | orchestrator |  "Validator run completed.", 2026-03-27 01:18:47.394631 | orchestrator |  "You can find the report file here:", 2026-03-27 01:18:47.394638 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-27T01:18:23+00:00-report.json", 2026-03-27 01:18:47.394646 | orchestrator |  "on the following host:", 2026-03-27 01:18:47.394653 | orchestrator |  "testbed-manager" 2026-03-27 01:18:47.394660 | orchestrator |  ] 2026-03-27 01:18:47.394667 | orchestrator | } 2026-03-27 01:18:47.394675 | orchestrator | 2026-03-27 01:18:47.394682 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:18:47.394689 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-27 01:18:47.394697 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-27 01:18:47.394720 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-27 01:18:47.394728 | orchestrator | 2026-03-27 01:18:47.394734 | orchestrator | 2026-03-27 01:18:47.394741 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:18:47.394802 | orchestrator | Friday 27 March 2026 01:18:47 +0000 (0:00:00.386) 0:00:24.680 ********** 2026-03-27 01:18:47.394810 | orchestrator | =============================================================================== 2026-03-27 01:18:47.394817 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.24s 2026-03-27 01:18:47.394824 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.80s 2026-03-27 01:18:47.394830 | orchestrator | Aggregate test results step one ----------------------------------------- 1.69s 2026-03-27 01:18:47.394837 | orchestrator | Write report file ------------------------------------------------------- 1.28s 2026-03-27 01:18:47.394844 | orchestrator | Get timestamp for report file ------------------------------------------- 1.02s 2026-03-27 01:18:47.394851 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2026-03-27 01:18:47.394857 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.65s 2026-03-27 01:18:47.394864 | orchestrator | Print report file information ------------------------------------------- 0.61s 2026-03-27 01:18:47.394870 | orchestrator | Prepare test data ------------------------------------------------------- 0.55s 2026-03-27 01:18:47.394877 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.50s 2026-03-27 01:18:47.394883 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.50s 2026-03-27 01:18:47.394890 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.47s 2026-03-27 01:18:47.394896 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.46s 2026-03-27 01:18:47.394903 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s 2026-03-27 01:18:47.394909 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.46s 2026-03-27 01:18:47.394915 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.45s 2026-03-27 01:18:47.394921 | orchestrator | Calculate OSD devices for each 
host ------------------------------------- 0.45s 2026-03-27 01:18:47.394927 | orchestrator | Prepare test data ------------------------------------------------------- 0.42s 2026-03-27 01:18:47.394934 | orchestrator | Print report file information ------------------------------------------- 0.39s 2026-03-27 01:18:47.394940 | orchestrator | Flush handlers ---------------------------------------------------------- 0.37s 2026-03-27 01:18:47.584749 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-27 01:18:47.593212 | orchestrator | + set -e 2026-03-27 01:18:47.593279 | orchestrator | + source /opt/manager-vars.sh 2026-03-27 01:18:47.593286 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-27 01:18:47.593290 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-27 01:18:47.593295 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-27 01:18:47.593299 | orchestrator | ++ CEPH_VERSION=reef 2026-03-27 01:18:47.593303 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-27 01:18:47.593308 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-27 01:18:47.593312 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-27 01:18:47.593316 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-27 01:18:47.593321 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-27 01:18:47.593324 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-27 01:18:47.593328 | orchestrator | ++ export ARA=false 2026-03-27 01:18:47.593332 | orchestrator | ++ ARA=false 2026-03-27 01:18:47.593336 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-27 01:18:47.593341 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-27 01:18:47.593344 | orchestrator | ++ export TEMPEST=true 2026-03-27 01:18:47.593348 | orchestrator | ++ TEMPEST=true 2026-03-27 01:18:47.593352 | orchestrator | ++ export IS_ZUUL=true 2026-03-27 01:18:47.593356 | orchestrator | ++ IS_ZUUL=true 2026-03-27 01:18:47.593360 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154 
2026-03-27 01:18:47.593364 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154 2026-03-27 01:18:47.593367 | orchestrator | ++ export EXTERNAL_API=false 2026-03-27 01:18:47.593371 | orchestrator | ++ EXTERNAL_API=false 2026-03-27 01:18:47.593375 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-27 01:18:47.593378 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-27 01:18:47.593382 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-27 01:18:47.593386 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-27 01:18:47.593390 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-27 01:18:47.593412 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-27 01:18:47.593416 | orchestrator | + source /etc/os-release 2026-03-27 01:18:47.593419 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-27 01:18:47.593423 | orchestrator | ++ NAME=Ubuntu 2026-03-27 01:18:47.593427 | orchestrator | ++ VERSION_ID=24.04 2026-03-27 01:18:47.593431 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-27 01:18:47.593434 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-27 01:18:47.593438 | orchestrator | ++ ID=ubuntu 2026-03-27 01:18:47.593442 | orchestrator | ++ ID_LIKE=debian 2026-03-27 01:18:47.593446 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-27 01:18:47.593450 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-27 01:18:47.593498 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-27 01:18:47.593503 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-27 01:18:47.593507 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-27 01:18:47.593511 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-27 01:18:47.593656 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-27 01:18:47.593678 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-27 01:18:47.593684 | orchestrator | + dpkg 
-s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-27 01:18:47.616796 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-27 01:19:10.623675 | orchestrator | 2026-03-27 01:19:10.623756 | orchestrator | # Status of Elasticsearch 2026-03-27 01:19:10.623763 | orchestrator | 2026-03-27 01:19:10.623768 | orchestrator | + pushd /opt/configuration/contrib 2026-03-27 01:19:10.623773 | orchestrator | + echo 2026-03-27 01:19:10.623778 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-27 01:19:10.623782 | orchestrator | + echo 2026-03-27 01:19:10.623786 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-27 01:19:10.789589 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-27 01:19:10.789679 | orchestrator | 2026-03-27 01:19:10.789689 | orchestrator | # Status of MariaDB 2026-03-27 01:19:10.789694 | orchestrator | 2026-03-27 01:19:10.789699 | orchestrator | + echo 2026-03-27 01:19:10.789703 | orchestrator | + echo '# Status of MariaDB' 2026-03-27 01:19:10.789707 | orchestrator | + echo 2026-03-27 01:19:10.790120 | orchestrator | ++ semver latest 10.0.0-0 2026-03-27 01:19:10.846175 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-27 01:19:10.846266 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-27 01:19:10.846277 | orchestrator | + osism status database 2026-03-27 01:19:12.407705 | orchestrator | 2026-03-27 01:19:12 | ERROR  | Unable to get ansible vault password 2026-03-27 01:19:12.407804 | orchestrator | 2026-03-27 01:19:12 | ERROR  | Unable to get vault secret: 
[Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-27 01:19:12.407814 | orchestrator | 2026-03-27 01:19:12 | ERROR  | Dropping encrypted entries 2026-03-27 01:19:12.440767 | orchestrator | 2026-03-27 01:19:12 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-03-27 01:19:12.451063 | orchestrator | 2026-03-27 01:19:12 | INFO  | Cluster Status: Primary 2026-03-27 01:19:12.451149 | orchestrator | 2026-03-27 01:19:12 | INFO  | Connected: ON 2026-03-27 01:19:12.451158 | orchestrator | 2026-03-27 01:19:12 | INFO  | Ready: ON 2026-03-27 01:19:12.451164 | orchestrator | 2026-03-27 01:19:12 | INFO  | Cluster Size: 3 2026-03-27 01:19:12.451171 | orchestrator | 2026-03-27 01:19:12 | INFO  | Local State: Synced 2026-03-27 01:19:12.451178 | orchestrator | 2026-03-27 01:19:12 | INFO  | Cluster State UUID: 995fc4fc-2977-11f1-a4fc-3358ffe79555 2026-03-27 01:19:12.451186 | orchestrator | 2026-03-27 01:19:12 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-03-27 01:19:12.451222 | orchestrator | 2026-03-27 01:19:12 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-03-27 01:19:12.451240 | orchestrator | 2026-03-27 01:19:12 | INFO  | Local Node UUID: cba29062-2977-11f1-9102-ce7e80cefc48 2026-03-27 01:19:12.451248 | orchestrator | 2026-03-27 01:19:12 | INFO  | Flow Control Paused: 0.00% 2026-03-27 01:19:12.451253 | orchestrator | 2026-03-27 01:19:12 | INFO  | Recv Queue Avg: 0 2026-03-27 01:19:12.451257 | orchestrator | 2026-03-27 01:19:12 | INFO  | Send Queue Avg: 0.000585309 2026-03-27 01:19:12.451261 | orchestrator | 2026-03-27 01:19:12 | INFO  | Transactions: 4591 local commits, 6776 replicated, 102 received 2026-03-27 01:19:12.451265 | orchestrator | 2026-03-27 01:19:12 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-03-27 01:19:12.451269 | orchestrator | 2026-03-27 01:19:12 | INFO  | MariaDB Uptime: 22 minutes, 41 seconds 2026-03-27 01:19:12.451273 | orchestrator | 2026-03-27 01:19:12 
| INFO  | Threads: 128 connected, 1 running 2026-03-27 01:19:12.451837 | orchestrator | 2026-03-27 01:19:12 | INFO  | Queries: 239348 total, 0 slow 2026-03-27 01:19:12.451907 | orchestrator | 2026-03-27 01:19:12 | INFO  | Aborted Connects: 125 2026-03-27 01:19:12.451918 | orchestrator | 2026-03-27 01:19:12 | INFO  | MariaDB Galera Cluster validation PASSED 2026-03-27 01:19:12.676708 | orchestrator | 2026-03-27 01:19:12.676774 | orchestrator | # Status of Prometheus 2026-03-27 01:19:12.676781 | orchestrator | 2026-03-27 01:19:12.676786 | orchestrator | + echo 2026-03-27 01:19:12.676790 | orchestrator | + echo '# Status of Prometheus' 2026-03-27 01:19:12.676794 | orchestrator | + echo 2026-03-27 01:19:12.676798 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-27 01:19:12.726507 | orchestrator | Unauthorized 2026-03-27 01:19:12.729749 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-27 01:19:12.794360 | orchestrator | Unauthorized 2026-03-27 01:19:12.796431 | orchestrator | 2026-03-27 01:19:12.796486 | orchestrator | # Status of RabbitMQ 2026-03-27 01:19:12.796496 | orchestrator | 2026-03-27 01:19:12.796502 | orchestrator | + echo 2026-03-27 01:19:12.796508 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-27 01:19:12.796514 | orchestrator | + echo 2026-03-27 01:19:12.797512 | orchestrator | ++ semver latest 10.0.0-0 2026-03-27 01:19:12.858242 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-27 01:19:12.858297 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-27 01:19:12.858304 | orchestrator | + osism status messaging 2026-03-27 01:19:20.163295 | orchestrator | 2026-03-27 01:19:20 | ERROR  | Unable to get ansible vault password 2026-03-27 01:19:20.163354 | orchestrator | 2026-03-27 01:19:20 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-27 01:19:20.163362 | orchestrator | 2026-03-27 01:19:20 | ERROR  | Dropping encrypted 
entries
2026-03-27 01:19:20.197668 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack...
2026-03-27 01:19:20.250694 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7
2026-03-27 01:19:20.250741 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15
2026-03-27 01:19:20.250746 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0
2026-03-27 01:19:20.250751 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Cluster Size: 3
2026-03-27 01:19:20.250827 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-27 01:19:20.250835 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-27 01:19:20.250839 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Partitions: None (healthy)
2026-03-27 01:19:20.250844 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Connections: 207, Channels: 206, Queues: 173
2026-03-27 01:19:20.250859 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Messages: 218 total, 218 ready, 0 unacked
2026-03-27 01:19:20.250908 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Message Rates: 10.2/s publish, 11.6/s deliver
2026-03-27 01:19:20.250959 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Disk Free: 58.0 GB (limit: 0.0 GB)
2026-03-27 01:19:20.251583 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB)
2026-03-27 01:19:20.251612 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] File Descriptors: 122/1024
2026-03-27 01:19:20.251618 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-0] Sockets: 76/832
2026-03-27 01:19:20.251625 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack...
2026-03-27 01:19:20.299618 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7
2026-03-27 01:19:20.299678 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15
2026-03-27 01:19:20.299687 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1
2026-03-27 01:19:20.299694 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Cluster Size: 3
2026-03-27 01:19:20.299701 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-27 01:19:20.299837 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-27 01:19:20.299883 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Partitions: None (healthy)
2026-03-27 01:19:20.299893 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Connections: 207, Channels: 206, Queues: 173
2026-03-27 01:19:20.299900 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Messages: 218 total, 218 ready, 0 unacked
2026-03-27 01:19:20.299906 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Message Rates: 10.2/s publish, 11.6/s deliver
2026-03-27 01:19:20.299913 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Disk Free: 58.4 GB (limit: 0.0 GB)
2026-03-27 01:19:20.300077 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Memory Used: 0.17 GB (limit: 12.54 GB)
2026-03-27 01:19:20.300509 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] File Descriptors: 113/1024
2026-03-27 01:19:20.300544 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-1] Sockets: 65/832
2026-03-27 01:19:20.300551 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack...
2026-03-27 01:19:20.344523 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7
2026-03-27 01:19:20.344580 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15
2026-03-27 01:19:20.344588 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2
2026-03-27 01:19:20.344595 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Cluster Size: 3
2026-03-27 01:19:20.344986 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-27 01:19:20.345012 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-27 01:19:20.345033 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Partitions: None (healthy)
2026-03-27 01:19:20.345162 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Connections: 207, Channels: 206, Queues: 173
2026-03-27 01:19:20.345175 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Messages: 218 total, 218 ready, 0 unacked
2026-03-27 01:19:20.345191 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Message Rates: 10.2/s publish, 11.6/s deliver
2026-03-27 01:19:20.345359 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Disk Free: 58.3 GB (limit: 0.0 GB)
2026-03-27 01:19:20.345522 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Memory Used: 0.18 GB (limit: 12.54 GB)
2026-03-27 01:19:20.345681 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] File Descriptors: 112/1024
2026-03-27 01:19:20.346044 | orchestrator | 2026-03-27 01:19:20 | INFO  | [testbed-node-2] Sockets: 66/832
2026-03-27 01:19:20.346288 | orchestrator | 2026-03-27 01:19:20 | INFO  | RabbitMQ Cluster validation PASSED
2026-03-27 01:19:20.641307 | orchestrator |
2026-03-27 01:19:20.641366 | orchestrator | # Status of Redis
2026-03-27 01:19:20.641377 | orchestrator |
2026-03-27 01:19:20.641384 | orchestrator | + echo
2026-03-27 01:19:20.641391 | orchestrator | + echo '# Status of Redis'
2026-03-27 01:19:20.641399 | orchestrator | + echo
2026-03-27 01:19:20.641473 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-03-27 01:19:20.645216 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001044s;;;0.000000;10.000000
2026-03-27 01:19:20.646159 | orchestrator |
2026-03-27 01:19:20.646197 | orchestrator | # Create backup of MariaDB database
2026-03-27 01:19:20.646203 | orchestrator |
2026-03-27 01:19:20.646208 | orchestrator | + popd
2026-03-27 01:19:20.646212 | orchestrator | + echo
2026-03-27 01:19:20.646215 | orchestrator | + echo '# Create backup of MariaDB database'
2026-03-27 01:19:20.646220 | orchestrator | + echo
2026-03-27 01:19:20.646224 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-03-27 01:19:21.914183 | orchestrator | 2026-03-27 01:19:21 | INFO  | Prepare task for execution of mariadb_backup.
2026-03-27 01:19:21.977013 | orchestrator | 2026-03-27 01:19:21 | INFO  | Task 80aecfe7-372b-454f-80c4-9c1dc5081710 (mariadb_backup) was prepared for execution.
2026-03-27 01:19:21.977102 | orchestrator | 2026-03-27 01:19:21 | INFO  | It takes a moment until task 80aecfe7-372b-454f-80c4-9c1dc5081710 (mariadb_backup) has been started and output is visible here.
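The per-node output above is read from the RabbitMQ Management API (port 15672). The pass/fail decision such a validator makes — every cluster member running and no partitions reported — can be sketched offline. A minimal sketch, assuming the field names of the management API's `/api/nodes` payload (`name`, `running`, `partitions`); this helper is illustrative and is not the validator actually used by the job:

```python
def cluster_healthy(nodes):
    """Evaluate RabbitMQ cluster health from /api/nodes-style payloads.

    Healthy means: every member is running and reports no partitions.
    Returns (ok, problems) where problems is a list of human-readable
    findings. The real validator in the log also reports versions,
    queue counts, FD/socket usage, etc.
    """
    problems = []
    for node in nodes:
        if not node.get("running", False):
            problems.append(f"{node['name']} is not running")
        if node.get("partitions"):
            problems.append(f"{node['name']} sees partitions: {node['partitions']}")
    return (len(problems) == 0, problems)


# Sample payload mirroring the three-node cluster in the log.
nodes = [
    {"name": "rabbit@testbed-node-0", "running": True, "partitions": []},
    {"name": "rabbit@testbed-node-1", "running": True, "partitions": []},
    {"name": "rabbit@testbed-node-2", "running": True, "partitions": []},
]
ok, problems = cluster_healthy(nodes)
print(ok)  # True — matches "RabbitMQ Cluster validation PASSED"
```

A degraded cluster (a stopped node or a non-empty `partitions` list) would flip the result to False with an explanatory finding per node.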
2026-03-27 01:22:20.271205 | orchestrator |
2026-03-27 01:22:20.271317 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-27 01:22:20.271331 | orchestrator |
2026-03-27 01:22:20.271338 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-27 01:22:20.271345 | orchestrator | Friday 27 March 2026 01:19:25 +0000 (0:00:00.237) 0:00:00.237 **********
2026-03-27 01:22:20.271353 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:22:20.271361 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:22:20.271368 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:22:20.271374 | orchestrator |
2026-03-27 01:22:20.271381 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-27 01:22:20.271388 | orchestrator | Friday 27 March 2026 01:19:25 +0000 (0:00:00.302) 0:00:00.540 **********
2026-03-27 01:22:20.271395 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-27 01:22:20.271402 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-27 01:22:20.271409 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-27 01:22:20.271416 | orchestrator |
2026-03-27 01:22:20.271424 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-27 01:22:20.271450 | orchestrator |
2026-03-27 01:22:20.271455 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-27 01:22:20.271459 | orchestrator | Friday 27 March 2026 01:19:25 +0000 (0:00:00.396) 0:00:00.937 **********
2026-03-27 01:22:20.271463 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-27 01:22:20.271467 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-27 01:22:20.271471 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-27 01:22:20.271475 | orchestrator |
2026-03-27 01:22:20.271479 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-27 01:22:20.271483 | orchestrator | Friday 27 March 2026 01:19:26 +0000 (0:00:00.408) 0:00:01.345 **********
2026-03-27 01:22:20.271488 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-27 01:22:20.271493 | orchestrator |
2026-03-27 01:22:20.271497 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-03-27 01:22:20.271501 | orchestrator | Friday 27 March 2026 01:19:26 +0000 (0:00:00.630) 0:00:01.976 **********
2026-03-27 01:22:20.271505 | orchestrator | ok: [testbed-node-1]
2026-03-27 01:22:20.271508 | orchestrator | ok: [testbed-node-2]
2026-03-27 01:22:20.271512 | orchestrator | ok: [testbed-node-0]
2026-03-27 01:22:20.271516 | orchestrator |
2026-03-27 01:22:20.271520 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-03-27 01:22:20.271524 | orchestrator | Friday 27 March 2026 01:19:30 +0000 (0:00:03.368) 0:00:05.345 **********
2026-03-27 01:22:20.271528 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:22:20.271533 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:22:20.271536 | orchestrator |
2026-03-27 01:22:20.271540 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] ***
2026-03-27 01:22:20.271544 | orchestrator | changed: [testbed-node-0]
2026-03-27 01:22:20.271549 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-27 01:22:20.271553 | orchestrator |
2026-03-27 01:22:20.271557 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-27 01:22:20.271560 | orchestrator | skipping: no hosts matched
2026-03-27 01:22:20.271564 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-27 01:22:20.271568 | orchestrator |
2026-03-27 01:22:20.271572 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-27 01:22:20.271576 | orchestrator | skipping: no hosts matched
2026-03-27 01:22:20.271580 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-27 01:22:20.271584 | orchestrator | mariadb_bootstrap_restart
2026-03-27 01:22:20.271588 | orchestrator |
2026-03-27 01:22:20.271592 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-27 01:22:20.271595 | orchestrator | skipping: no hosts matched
2026-03-27 01:22:20.271599 | orchestrator |
2026-03-27 01:22:20.271603 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-27 01:22:20.271607 | orchestrator |
2026-03-27 01:22:20.271610 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-27 01:22:20.271630 | orchestrator | Friday 27 March 2026 01:22:19 +0000 (0:02:49.239) 0:02:54.585 **********
2026-03-27 01:22:20.271636 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:22:20.271642 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:22:20.271648 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:22:20.271654 | orchestrator |
2026-03-27 01:22:20.271660 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-27 01:22:20.271665 | orchestrator | Friday 27 March 2026 01:22:19 +0000 (0:00:00.277) 0:02:54.863 **********
2026-03-27 01:22:20.271671 | orchestrator | skipping: [testbed-node-0]
2026-03-27 01:22:20.271677 | orchestrator | skipping: [testbed-node-1]
2026-03-27 01:22:20.271683 | orchestrator | skipping: [testbed-node-2]
2026-03-27 01:22:20.271690 | orchestrator |
2026-03-27 01:22:20.271696 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:22:20.271710 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-03-27 01:22:20.271719 | orchestrator | testbed-node-1 : ok=4  changed=0  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
2026-03-27 01:22:20.271727 | orchestrator | testbed-node-2 : ok=4  changed=0  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
2026-03-27 01:22:20.271734 | orchestrator |
2026-03-27 01:22:20.271739 | orchestrator |
2026-03-27 01:22:20.271744 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:22:20.271749 | orchestrator | Friday 27 March 2026 01:22:19 +0000 (0:00:00.210) 0:02:55.073 **********
2026-03-27 01:22:20.271753 | orchestrator | ===============================================================================
2026-03-27 01:22:20.271771 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 169.24s
2026-03-27 01:22:20.271776 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.37s
2026-03-27 01:22:20.271781 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.63s
2026-03-27 01:22:20.271785 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s
2026-03-27 01:22:20.271789 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-03-27 01:22:20.271792 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-03-27 01:22:20.271796 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.28s
2026-03-27 01:22:20.271861 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s
2026-03-27 01:22:20.480071 | orchestrator | + sh -c
/opt/configuration/scripts/check/300-openstack.sh
2026-03-27 01:22:20.488682 | orchestrator | + set -e
2026-03-27 01:22:20.488766 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-27 01:22:20.488778 | orchestrator | ++ export INTERACTIVE=false
2026-03-27 01:22:20.488787 | orchestrator | ++ INTERACTIVE=false
2026-03-27 01:22:20.488793 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-27 01:22:20.488800 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-27 01:22:20.488807 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-27 01:22:20.489886 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-27 01:22:20.494203 | orchestrator |
2026-03-27 01:22:20.494287 | orchestrator | # OpenStack endpoints
2026-03-27 01:22:20.494298 | orchestrator |
2026-03-27 01:22:20.494307 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-27 01:22:20.494315 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-27 01:22:20.494322 | orchestrator | + export OS_CLOUD=admin
2026-03-27 01:22:20.494329 | orchestrator | + OS_CLOUD=admin
2026-03-27 01:22:20.494336 | orchestrator | + echo
2026-03-27 01:22:20.494343 | orchestrator | + echo '# OpenStack endpoints'
2026-03-27 01:22:20.494350 | orchestrator | + echo
2026-03-27 01:22:20.494357 | orchestrator | + openstack endpoint list
2026-03-27 01:22:23.842658 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-27 01:22:23.842734 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-03-27 01:22:23.842741 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-27 01:22:23.842745 | orchestrator | | 0acc2df547204512a5298cc67894c243 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-03-27 01:22:23.842749 | orchestrator | | 1587412f3de74bbea7f27f25b135b3ff | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-03-27 01:22:23.842766 | orchestrator | | 1a0922d63ef648ab9cca7ecfb86fbd47 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-03-27 01:22:23.842785 | orchestrator | | 2224556d4a54419683377024545b1595 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-03-27 01:22:23.842789 | orchestrator | | 23fae4649f5e42dca7fd27194e2b0634 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-27 01:22:23.842793 | orchestrator | | 41abdf97cf624bb2adf2dafb372b6ed2 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-03-27 01:22:23.842797 | orchestrator | | 476bc532561549848ddf7e9383c377af | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-03-27 01:22:23.842801 | orchestrator | | 48321905b98b4a58b288ace329f9352a | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-03-27 01:22:23.842804 | orchestrator | | 4b25185125b649e48415cb74043ced38 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-03-27 01:22:23.842808 | orchestrator | | 561327279648419e8641187a82435304 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-03-27 01:22:23.842812 | orchestrator | | 63ce3a5197d84a8791decad88c900b41 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-03-27 01:22:23.842816 | orchestrator | | 8304a9b9a1fa4ff5906baebd1c2601ab | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-03-27 01:22:23.842819 | orchestrator | | 84c942f5faa34f2a81cfed4374c93c0b | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-03-27 01:22:23.842823 | orchestrator | | 86854aaaffeb4017a37ef204ea48938a | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-03-27 01:22:23.842827 | orchestrator | | 98625e457e354f5cbf6bcfce4c618f42 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-03-27 01:22:23.842839 | orchestrator | | 9b4572b9a5dc4bf9a3c4b75fe44751aa | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-03-27 01:22:23.842843 | orchestrator | | 9e09dc31e35e4961bb8ca1fa45528994 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-27 01:22:23.842852 | orchestrator | | acf8705360f044ddb11c538ff4c244d9 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-03-27 01:22:23.842856 | orchestrator | | b862e17e120c4ab4bc0bde85d348816d | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-27 01:22:23.842860 | orchestrator | | ca0ad308611444f58229e1d630014788 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-03-27 01:22:23.842873 | orchestrator | | cd984520c32a431e953c076c0606e82a | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-03-27 01:22:23.842877 | orchestrator | | ecdc06be959f4f9591d191cba8e49de7 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-27 01:22:23.842881 | orchestrator |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-27 01:22:24.062847 | orchestrator |
2026-03-27 01:22:24.062920 | orchestrator | # Cinder
2026-03-27 01:22:24.062927 | orchestrator |
2026-03-27 01:22:24.062931 | orchestrator | + echo
2026-03-27 01:22:24.062936 | orchestrator | + echo '# Cinder'
2026-03-27 01:22:24.062940 | orchestrator | + echo
2026-03-27 01:22:24.062944 | orchestrator | + openstack volume service list
2026-03-27 01:22:26.652408 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-27 01:22:26.652473 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-03-27 01:22:26.652482 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-27 01:22:26.652500 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-27T01:22:19.000000 |
2026-03-27 01:22:26.652508 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-27T01:22:18.000000 |
2026-03-27 01:22:26.652514 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-27T01:22:19.000000 |
2026-03-27 01:22:26.652521 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-27T01:22:18.000000 |
2026-03-27 01:22:26.652528 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-27T01:22:22.000000 |
2026-03-27 01:22:26.652535 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-27T01:22:22.000000 |
2026-03-27 01:22:26.652542 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-27T01:22:24.000000 |
2026-03-27 01:22:26.652546 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-27T01:22:26.000000 |
2026-03-27 01:22:26.652550 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-27T01:22:26.000000 |
2026-03-27 01:22:26.652554 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-27 01:22:26.905252 | orchestrator |
2026-03-27 01:22:26.905313 | orchestrator | # Neutron
2026-03-27 01:22:26.905321 | orchestrator |
2026-03-27 01:22:26.905328 | orchestrator | + echo
2026-03-27 01:22:26.905335 | orchestrator | + echo '# Neutron'
2026-03-27 01:22:26.905343 | orchestrator | + echo
2026-03-27 01:22:26.905350 | orchestrator | + openstack network agent list
2026-03-27 01:22:29.699573 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-27 01:22:29.699673 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-03-27 01:22:29.699685 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-27 01:22:29.699692 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-03-27 01:22:29.699699 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-03-27 01:22:29.699706 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-03-27 01:22:29.699713 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-03-27 01:22:29.699720 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-03-27 01:22:29.699727 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-03-27 01:22:29.699736 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-27 01:22:29.699773 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-27 01:22:29.699780 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-27 01:22:29.699786 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-27 01:22:29.946522 | orchestrator | + openstack network service provider list
2026-03-27 01:22:32.545584 | orchestrator | +---------------+------+---------+
2026-03-27 01:22:32.545684 | orchestrator | | Service Type | Name | Default |
2026-03-27 01:22:32.545693 | orchestrator | +---------------+------+---------+
2026-03-27 01:22:32.545700 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-03-27 01:22:32.545706 | orchestrator | +---------------+------+---------+
2026-03-27 01:22:32.820677 | orchestrator |
2026-03-27 01:22:32.820765 | orchestrator | # Nova
2026-03-27 01:22:32.820776 | orchestrator |
2026-03-27 01:22:32.820782 | orchestrator | + echo
2026-03-27 01:22:32.820789 | orchestrator | + echo '# Nova'
2026-03-27 01:22:32.820796 | orchestrator | + echo
2026-03-27 01:22:32.820800 | orchestrator | + openstack compute service list
2026-03-27 01:22:35.371904 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-27 01:22:35.371956 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-03-27 01:22:35.371965 | orchestrator |
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-27 01:22:35.371972 | orchestrator | | 67a7259d-e1ba-4024-84ca-a317f4519b09 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-27T01:22:34.000000 |
2026-03-27 01:22:35.371979 | orchestrator | | a4a8437f-1da3-4f30-bc4d-ef556f1ddfc2 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-27T01:22:34.000000 |
2026-03-27 01:22:35.371995 | orchestrator | | 9378a6d7-e7e6-4e2f-86d4-af6b45833438 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-27T01:22:34.000000 |
2026-03-27 01:22:35.372002 | orchestrator | | 8f8c780f-7520-405a-b56d-920042fc70b3 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-27T01:22:28.000000 |
2026-03-27 01:22:35.372008 | orchestrator | | 29cdc05c-c1eb-4fcc-811e-4e8ad40c2048 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-27T01:22:29.000000 |
2026-03-27 01:22:35.372015 | orchestrator | | 5f9f0546-e5ed-4879-922c-40d46c06f4a9 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-27T01:22:29.000000 |
2026-03-27 01:22:35.372021 | orchestrator | | 37ac7340-120b-4371-9621-ec76a11a176d | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-27T01:22:25.000000 |
2026-03-27 01:22:35.372028 | orchestrator | | 6d852bfc-e1fa-4857-bd1b-b445fe0b00ca | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-27T01:22:25.000000 |
2026-03-27 01:22:35.372035 | orchestrator | | 63d19ba3-2f16-48bc-841b-529f2319a86f | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-27T01:22:26.000000 |
2026-03-27 01:22:35.372041 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-27 01:22:35.614430 | orchestrator | + openstack hypervisor list
2026-03-27 01:22:38.586774 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-27 01:22:38.586828 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-03-27 01:22:38.586834 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-27 01:22:38.586838 | orchestrator | | 08503599-d146-4e73-9e64-19b12247702e | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-03-27 01:22:38.586841 | orchestrator | | fcb194ea-4333-4a38-a2c8-05836b108053 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-03-27 01:22:38.586865 | orchestrator | | 0438563e-0f16-4ee6-a40f-9426a79e6ed0 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-03-27 01:22:38.586870 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-27 01:22:38.831880 | orchestrator |
2026-03-27 01:22:38.831933 | orchestrator | # Run OpenStack test play
2026-03-27 01:22:38.831941 | orchestrator |
2026-03-27 01:22:38.831946 | orchestrator | + echo
2026-03-27 01:22:38.831951 | orchestrator | + echo '# Run OpenStack test play'
2026-03-27 01:22:38.831956 | orchestrator | + echo
2026-03-27 01:22:38.831961 | orchestrator | + osism apply --environment openstack test
2026-03-27 01:22:40.071877 | orchestrator | 2026-03-27 01:22:40 | INFO  | Trying to run play test in environment openstack
2026-03-27 01:22:40.099940 | orchestrator | 2026-03-27 01:22:40 | INFO  | Prepare task for execution of test.
2026-03-27 01:22:40.167816 | orchestrator | 2026-03-27 01:22:40 | INFO  | Task 1cc947a4-e0ff-4cc8-a950-f45bcfafd457 (test) was prepared for execution.
2026-03-27 01:22:40.167904 | orchestrator | 2026-03-27 01:22:40 | INFO  | It takes a moment until task 1cc947a4-e0ff-4cc8-a950-f45bcfafd457 (test) has been started and output is visible here.
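The Cinder, Neutron, and Nova listings above are printed as tables for a human to inspect. A scripted gate could instead parse the CLI's JSON output (`openstack compute service list -f json`) and fail the job when any service is not enabled and up. A minimal sketch, assuming the CLI's `Status`/`State` JSON column names; this helper is hypothetical and is not part of the job's check scripts:

```python
import json

def unhealthy_services(services):
    """Return the entries that are not enabled/up.

    `services` is parsed output of e.g.
    `openstack compute service list -f json`. An empty result means
    every listed service is healthy.
    """
    return [s for s in services
            if s.get("Status") != "enabled" or s.get("State") != "up"]


# Sample mirroring the nova-compute rows in the log above.
sample = json.loads("""[
  {"Binary": "nova-compute", "Host": "testbed-node-3", "Status": "enabled", "State": "up"},
  {"Binary": "nova-compute", "Host": "testbed-node-5", "Status": "enabled", "State": "up"},
  {"Binary": "nova-compute", "Host": "testbed-node-4", "Status": "enabled", "State": "up"}
]""")
print(unhealthy_services(sample))  # [] — every service healthy, as in the log
```

The same filter applies unchanged to `openstack volume service list -f json`, since Cinder reports the same `Status`/`State` columns.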
2026-03-27 01:25:11.627522 | orchestrator |
2026-03-27 01:25:11.627631 | orchestrator | PLAY [Create test project] *****************************************************
2026-03-27 01:25:11.627643 | orchestrator |
2026-03-27 01:25:11.627650 | orchestrator | TASK [Create test domain] ******************************************************
2026-03-27 01:25:11.627657 | orchestrator | Friday 27 March 2026 01:22:43 +0000 (0:00:00.100) 0:00:00.100 **********
2026-03-27 01:25:11.627664 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627673 | orchestrator |
2026-03-27 01:25:11.627679 | orchestrator | TASK [Create test-admin user] **************************************************
2026-03-27 01:25:11.627686 | orchestrator | Friday 27 March 2026 01:22:47 +0000 (0:00:03.750) 0:00:03.851 **********
2026-03-27 01:25:11.627693 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627700 | orchestrator |
2026-03-27 01:25:11.627704 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-03-27 01:25:11.627708 | orchestrator | Friday 27 March 2026 01:22:51 +0000 (0:00:04.312) 0:00:08.163 **********
2026-03-27 01:25:11.627712 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627716 | orchestrator |
2026-03-27 01:25:11.627720 | orchestrator | TASK [Create test project] *****************************************************
2026-03-27 01:25:11.627724 | orchestrator | Friday 27 March 2026 01:22:57 +0000 (0:00:06.634) 0:00:14.797 **********
2026-03-27 01:25:11.627728 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627732 | orchestrator |
2026-03-27 01:25:11.627736 | orchestrator | TASK [Create test user] ********************************************************
2026-03-27 01:25:11.627740 | orchestrator | Friday 27 March 2026 01:23:02 +0000 (0:00:04.219) 0:00:19.017 **********
2026-03-27 01:25:11.627744 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627748 | orchestrator |
2026-03-27 01:25:11.627751 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-03-27 01:25:11.627755 | orchestrator | Friday 27 March 2026 01:23:06 +0000 (0:00:04.262) 0:00:23.280 **********
2026-03-27 01:25:11.627761 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-03-27 01:25:11.627768 | orchestrator | changed: [localhost] => (item=member)
2026-03-27 01:25:11.627775 | orchestrator | changed: [localhost] => (item=creator)
2026-03-27 01:25:11.627784 | orchestrator |
2026-03-27 01:25:11.627792 | orchestrator | TASK [Create test server group] ************************************************
2026-03-27 01:25:11.627801 | orchestrator | Friday 27 March 2026 01:23:17 +0000 (0:00:11.449) 0:00:34.729 **********
2026-03-27 01:25:11.627807 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627813 | orchestrator |
2026-03-27 01:25:11.627819 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-03-27 01:25:11.627825 | orchestrator | Friday 27 March 2026 01:23:22 +0000 (0:00:04.269) 0:00:38.999 **********
2026-03-27 01:25:11.627850 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627854 | orchestrator |
2026-03-27 01:25:11.627858 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-03-27 01:25:11.627864 | orchestrator | Friday 27 March 2026 01:23:26 +0000 (0:00:04.549) 0:00:43.548 **********
2026-03-27 01:25:11.627871 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627877 | orchestrator |
2026-03-27 01:25:11.627882 | orchestrator | TASK [Create icmp security group] **********************************************
2026-03-27 01:25:11.627889 | orchestrator | Friday 27 March 2026 01:23:31 +0000 (0:00:04.364) 0:00:47.913 **********
2026-03-27 01:25:11.627895 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627900 | orchestrator |
2026-03-27 01:25:11.627906 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-03-27 01:25:11.627934 | orchestrator | Friday 27 March 2026 01:23:35 +0000 (0:00:04.177) 0:00:52.091 **********
2026-03-27 01:25:11.627948 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627953 | orchestrator |
2026-03-27 01:25:11.627960 | orchestrator | TASK [Create test keypair] *****************************************************
2026-03-27 01:25:11.627967 | orchestrator | Friday 27 March 2026 01:23:39 +0000 (0:00:03.992) 0:00:56.083 **********
2026-03-27 01:25:11.627972 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.627978 | orchestrator |
2026-03-27 01:25:11.627984 | orchestrator | TASK [Create test network] *****************************************************
2026-03-27 01:25:11.627990 | orchestrator | Friday 27 March 2026 01:23:43 +0000 (0:00:04.062) 0:01:00.145 **********
2026-03-27 01:25:11.627996 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.628002 | orchestrator |
2026-03-27 01:25:11.628008 | orchestrator | TASK [Create test subnet] ******************************************************
2026-03-27 01:25:11.628015 | orchestrator | Friday 27 March 2026 01:23:47 +0000 (0:00:04.570) 0:01:04.716 **********
2026-03-27 01:25:11.628021 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.628029 | orchestrator |
2026-03-27 01:25:11.628033 | orchestrator | TASK [Create test router] ******************************************************
2026-03-27 01:25:11.628037 | orchestrator | Friday 27 March 2026 01:23:53 +0000 (0:00:05.137) 0:01:09.853 **********
2026-03-27 01:25:11.628042 | orchestrator | changed: [localhost]
2026-03-27 01:25:11.628045 | orchestrator |
2026-03-27 01:25:11.628049 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-03-27 01:25:11.628053 | orchestrator |
2026-03-27 01:25:11.628057 | orchestrator | TASK [Get test server group] ***************************************************
2026-03-27 01:25:11.628063
| orchestrator | Friday 27 March 2026 01:24:03 +0000 (0:00:10.094) 0:01:19.948 ********** 2026-03-27 01:25:11.628070 | orchestrator | ok: [localhost] 2026-03-27 01:25:11.628076 | orchestrator | 2026-03-27 01:25:11.628081 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-27 01:25:11.628087 | orchestrator | Friday 27 March 2026 01:24:06 +0000 (0:00:03.714) 0:01:23.662 ********** 2026-03-27 01:25:11.628093 | orchestrator | skipping: [localhost] 2026-03-27 01:25:11.628099 | orchestrator | 2026-03-27 01:25:11.628105 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-27 01:25:11.628112 | orchestrator | Friday 27 March 2026 01:24:06 +0000 (0:00:00.056) 0:01:23.719 ********** 2026-03-27 01:25:11.628118 | orchestrator | skipping: [localhost] 2026-03-27 01:25:11.628171 | orchestrator | 2026-03-27 01:25:11.628178 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-27 01:25:11.628183 | orchestrator | Friday 27 March 2026 01:24:06 +0000 (0:00:00.089) 0:01:23.809 ********** 2026-03-27 01:25:11.628187 | orchestrator | skipping: [localhost] => (item=test-4)  2026-03-27 01:25:11.628192 | orchestrator | skipping: [localhost] => (item=test-3)  2026-03-27 01:25:11.628210 | orchestrator | skipping: [localhost] => (item=test-2)  2026-03-27 01:25:11.628215 | orchestrator | skipping: [localhost] => (item=test-1)  2026-03-27 01:25:11.628220 | orchestrator | skipping: [localhost] => (item=test)  2026-03-27 01:25:11.628227 | orchestrator | skipping: [localhost] 2026-03-27 01:25:11.628233 | orchestrator | 2026-03-27 01:25:11.628260 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-03-27 01:25:11.628277 | orchestrator | Friday 27 March 2026 01:24:07 +0000 (0:00:00.170) 0:01:23.979 ********** 2026-03-27 01:25:11.628283 | orchestrator | skipping: [localhost] 2026-03-27 
01:25:11.628289 | orchestrator | 2026-03-27 01:25:11.628296 | orchestrator | TASK [Create test instances] *************************************************** 2026-03-27 01:25:11.628302 | orchestrator | Friday 27 March 2026 01:24:07 +0000 (0:00:00.149) 0:01:24.129 ********** 2026-03-27 01:25:11.628308 | orchestrator | changed: [localhost] => (item=test) 2026-03-27 01:25:11.628314 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-27 01:25:11.628320 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-27 01:25:11.628327 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-27 01:25:11.628334 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-27 01:25:11.628339 | orchestrator | 2026-03-27 01:25:11.628344 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-03-27 01:25:11.628348 | orchestrator | Friday 27 March 2026 01:24:12 +0000 (0:00:04.767) 0:01:28.897 ********** 2026-03-27 01:25:11.628353 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-03-27 01:25:11.628359 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-03-27 01:25:11.628363 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-03-27 01:25:11.628367 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
2026-03-27 01:25:11.628373 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j15785742616.2776', 'results_file': '/ansible/.ansible_async/j15785742616.2776', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:11.628386 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j840187822707.2801', 'results_file': '/ansible/.ansible_async/j840187822707.2801', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:11.628394 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j393648011531.2826', 'results_file': '/ansible/.ansible_async/j393648011531.2826', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:11.628402 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j583036197302.2851', 'results_file': '/ansible/.ansible_async/j583036197302.2851', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:11.628409 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j327855301373.2876', 'results_file': '/ansible/.ansible_async/j327855301373.2876', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:11.628415 | orchestrator | 2026-03-27 01:25:11.628421 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-03-27 01:25:11.628427 | orchestrator | Friday 27 March 2026 01:24:58 +0000 (0:00:46.803) 0:02:15.701 ********** 2026-03-27 01:25:11.628433 | orchestrator | changed: [localhost] => (item=test) 2026-03-27 01:25:11.628439 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-27 01:25:11.628445 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-27 01:25:11.628451 | orchestrator | changed: 
[localhost] => (item=test-3) 2026-03-27 01:25:11.628456 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-27 01:25:11.628463 | orchestrator | 2026-03-27 01:25:11.628469 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-03-27 01:25:11.628474 | orchestrator | Friday 27 March 2026 01:25:02 +0000 (0:00:04.077) 0:02:19.779 ********** 2026-03-27 01:25:11.628481 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-03-27 01:25:11.628487 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j473147138249.2973', 'results_file': '/ansible/.ansible_async/j473147138249.2973', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:11.628499 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j297006730709.2998', 'results_file': '/ansible/.ansible_async/j297006730709.2998', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:11.628505 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j182768917958.3030', 'results_file': '/ansible/.ansible_async/j182768917958.3030', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:11.628520 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j728306381943.3055', 'results_file': '/ansible/.ansible_async/j728306381943.3055', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:51.868264 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j91780491335.3080', 'results_file': '/ansible/.ansible_async/j91780491335.3080', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:51.868364 | orchestrator | 2026-03-27 
01:25:51.868377 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-03-27 01:25:51.868386 | orchestrator | Friday 27 March 2026 01:25:12 +0000 (0:00:09.397) 0:02:29.177 ********** 2026-03-27 01:25:51.868394 | orchestrator | changed: [localhost] => (item=test) 2026-03-27 01:25:51.868403 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-27 01:25:51.868411 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-27 01:25:51.868418 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-27 01:25:51.868425 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-27 01:25:51.868432 | orchestrator | 2026-03-27 01:25:51.868439 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-03-27 01:25:51.868446 | orchestrator | Friday 27 March 2026 01:25:16 +0000 (0:00:04.071) 0:02:33.249 ********** 2026-03-27 01:25:51.868453 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-03-27 01:25:51.868461 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j466590732732.3149', 'results_file': '/ansible/.ansible_async/j466590732732.3149', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:51.868468 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j89400678976.3174', 'results_file': '/ansible/.ansible_async/j89400678976.3174', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:51.868491 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j106058155740.3200', 'results_file': '/ansible/.ansible_async/j106058155740.3200', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:51.868498 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j567928152248.3226', 'results_file': '/ansible/.ansible_async/j567928152248.3226', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:51.868505 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j179466729746.3252', 'results_file': '/ansible/.ansible_async/j179466729746.3252', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-27 01:25:51.868512 | orchestrator | 2026-03-27 01:25:51.868519 | orchestrator | TASK [Create test volume] ****************************************************** 2026-03-27 01:25:51.868527 | orchestrator | Friday 27 March 2026 01:25:25 +0000 (0:00:09.481) 0:02:42.731 ********** 2026-03-27 01:25:51.868534 | orchestrator | changed: [localhost] 2026-03-27 01:25:51.868541 | orchestrator | 2026-03-27 01:25:51.868547 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-03-27 01:25:51.868574 | orchestrator | Friday 27 March 2026 
01:25:32 +0000 (0:00:07.039) 0:02:49.770 ********** 2026-03-27 01:25:51.868581 | orchestrator | changed: [localhost] 2026-03-27 01:25:51.868587 | orchestrator | 2026-03-27 01:25:51.868593 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-03-27 01:25:51.868599 | orchestrator | Friday 27 March 2026 01:25:46 +0000 (0:00:13.357) 0:03:03.128 ********** 2026-03-27 01:25:51.868605 | orchestrator | ok: [localhost] 2026-03-27 01:25:51.868611 | orchestrator | 2026-03-27 01:25:51.868617 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-03-27 01:25:51.868623 | orchestrator | Friday 27 March 2026 01:25:51 +0000 (0:00:05.364) 0:03:08.492 ********** 2026-03-27 01:25:51.868630 | orchestrator | ok: [localhost] => { 2026-03-27 01:25:51.868637 | orchestrator |  "msg": "192.168.112.187" 2026-03-27 01:25:51.868644 | orchestrator | } 2026-03-27 01:25:51.868651 | orchestrator | 2026-03-27 01:25:51.868657 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-27 01:25:51.868665 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-27 01:25:51.868673 | orchestrator | 2026-03-27 01:25:51.868680 | orchestrator | 2026-03-27 01:25:51.868686 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-27 01:25:51.868693 | orchestrator | Friday 27 March 2026 01:25:51 +0000 (0:00:00.046) 0:03:08.539 ********** 2026-03-27 01:25:51.868700 | orchestrator | =============================================================================== 2026-03-27 01:25:51.868707 | orchestrator | Wait for instance creation to complete --------------------------------- 46.80s 2026-03-27 01:25:51.868714 | orchestrator | Attach test volume ----------------------------------------------------- 13.36s 2026-03-27 01:25:51.868720 | orchestrator | Add member roles to user 
test ------------------------------------------ 11.45s 2026-03-27 01:25:51.868727 | orchestrator | Create test router ----------------------------------------------------- 10.09s 2026-03-27 01:25:51.868733 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.48s 2026-03-27 01:25:51.868740 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.40s 2026-03-27 01:25:51.868747 | orchestrator | Create test volume ------------------------------------------------------ 7.04s 2026-03-27 01:25:51.868769 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.63s 2026-03-27 01:25:51.868777 | orchestrator | Create floating ip address ---------------------------------------------- 5.36s 2026-03-27 01:25:51.868784 | orchestrator | Create test subnet ------------------------------------------------------ 5.14s 2026-03-27 01:25:51.868790 | orchestrator | Create test instances --------------------------------------------------- 4.77s 2026-03-27 01:25:51.868797 | orchestrator | Create test network ----------------------------------------------------- 4.57s 2026-03-27 01:25:51.868803 | orchestrator | Create ssh security group ----------------------------------------------- 4.55s 2026-03-27 01:25:51.868809 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.36s 2026-03-27 01:25:51.868816 | orchestrator | Create test-admin user -------------------------------------------------- 4.31s 2026-03-27 01:25:51.868823 | orchestrator | Create test server group ------------------------------------------------ 4.27s 2026-03-27 01:25:51.868831 | orchestrator | Create test user -------------------------------------------------------- 4.26s 2026-03-27 01:25:51.868838 | orchestrator | Create test project ----------------------------------------------------- 4.22s 2026-03-27 01:25:51.868846 | orchestrator | Create icmp security group 
---------------------------------------------- 4.18s 2026-03-27 01:25:51.868853 | orchestrator | Add metadata to instances ----------------------------------------------- 4.08s 2026-03-27 01:25:51.995693 | orchestrator | + server_list 2026-03-27 01:25:51.995779 | orchestrator | + openstack --os-cloud test server list 2026-03-27 01:25:55.265588 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-27 01:25:55.265689 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-03-27 01:25:55.265696 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-27 01:25:55.265711 | orchestrator | | 603639ea-0672-4b4a-9398-906fc860df0c | test-4 | ACTIVE | test=192.168.112.148, 192.168.200.131 | N/A (booted from volume) | SCS-1L-1 | 2026-03-27 01:25:55.265715 | orchestrator | | cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d | test-3 | ACTIVE | test=192.168.112.134, 192.168.200.61 | N/A (booted from volume) | SCS-1L-1 | 2026-03-27 01:25:55.265719 | orchestrator | | 1d8278fb-a580-4a9a-9140-3ac30647656d | test-2 | ACTIVE | test=192.168.112.156, 192.168.200.192 | N/A (booted from volume) | SCS-1L-1 | 2026-03-27 01:25:55.265723 | orchestrator | | 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 | test-1 | ACTIVE | test=192.168.112.135, 192.168.200.71 | N/A (booted from volume) | SCS-1L-1 | 2026-03-27 01:25:55.265727 | orchestrator | | f19fbfb6-0649-428b-abab-bafb64f00dbb | test | ACTIVE | test=192.168.112.187, 192.168.200.80 | N/A (booted from volume) | SCS-1L-1 | 2026-03-27 01:25:55.265731 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-27 01:25:55.495588 | orchestrator | + openstack --os-cloud test server show test 2026-03-27 01:25:58.623172 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-27 01:25:58.623280 | orchestrator | | Field | Value | 2026-03-27 01:25:58.623291 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-27 01:25:58.623299 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-27 01:25:58.623306 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-27 01:25:58.623312 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-27 01:25:58.623336 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-03-27 01:25:58.623343 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-27 01:25:58.623353 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-27 01:25:58.623375 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-27 01:25:58.623382 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-27 01:25:58.623389 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-27 01:25:58.623396 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-27 01:25:58.623403 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-27 01:25:58.623409 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-27 01:25:58.623416 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-03-27 01:25:58.623428 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-27 01:25:58.623435 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-27 01:25:58.623441 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-27T01:24:40.000000 | 2026-03-27 01:25:58.623453 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-27 01:25:58.623461 | orchestrator | | accessIPv4 | | 2026-03-27 01:25:58.623467 | orchestrator | | accessIPv6 | | 2026-03-27 01:25:58.623474 | orchestrator | | addresses | test=192.168.112.187, 192.168.200.80 | 2026-03-27 01:25:58.623481 | orchestrator | | config_drive | | 2026-03-27 01:25:58.623492 | orchestrator | | created | 2026-03-27T01:24:16Z | 2026-03-27 01:25:58.623507 | orchestrator | | description | None | 2026-03-27 01:25:58.623514 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-27 01:25:58.623522 | orchestrator | | hostId | 01699903420b7ff1c876537deb5bb4354183a2128cc04476e8508b02 | 2026-03-27 01:25:58.623530 | orchestrator | | host_status | None | 2026-03-27 01:25:58.623540 | orchestrator | | id | f19fbfb6-0649-428b-abab-bafb64f00dbb | 2026-03-27 01:25:58.623546 | orchestrator | | image | N/A (booted from volume) | 2026-03-27 01:25:58.623552 | orchestrator | | key_name | test | 2026-03-27 01:25:58.623558 | orchestrator | | locked | False | 2026-03-27 01:25:58.623564 | orchestrator | | locked_reason | None | 2026-03-27 01:25:58.623575 | orchestrator | | name | test | 2026-03-27 01:25:58.623581 | orchestrator | | pinned_availability_zone | None | 2026-03-27 01:25:58.623587 | orchestrator | | progress | 0 | 2026-03-27 01:25:58.623597 | orchestrator | | 
project_id | 416d5a0928f04516bec3165b9006a6fe | 2026-03-27 01:25:58.623605 | orchestrator | | properties | hostname='test' | 2026-03-27 01:25:58.623618 | orchestrator | | security_groups | name='ssh' | 2026-03-27 01:25:58.623625 | orchestrator | | | name='icmp' | 2026-03-27 01:25:58.623632 | orchestrator | | server_groups | None | 2026-03-27 01:25:58.623638 | orchestrator | | status | ACTIVE | 2026-03-27 01:25:58.623651 | orchestrator | | tags | test | 2026-03-27 01:25:58.623658 | orchestrator | | trusted_image_certificates | None | 2026-03-27 01:25:58.623666 | orchestrator | | updated | 2026-03-27T01:25:03Z | 2026-03-27 01:25:58.623673 | orchestrator | | user_id | 5623d559f3164243ae0033e5aac02454 | 2026-03-27 01:25:58.623682 | orchestrator | | volumes_attached | delete_on_termination='True', id='07091a8d-8b8b-4411-bd13-f1cce89bf750' | 2026-03-27 01:25:58.623690 | orchestrator | | | delete_on_termination='False', id='76cdeacc-e7e9-4b1f-98fd-78f705f8a372' | 2026-03-27 01:25:58.627458 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-27 01:25:58.893627 | orchestrator | + openstack --os-cloud test server show test-1 2026-03-27 01:26:01.842426 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-27 
01:26:01.842480 | orchestrator | | Field | Value | 2026-03-27 01:26:01.842501 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-27 01:26:01.842506 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-27 01:26:01.842510 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-27 01:26:01.842514 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-27 01:26:01.842518 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-27 01:26:01.842529 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-27 01:26:01.842533 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-27 01:26:01.842545 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-27 01:26:01.842549 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-27 01:26:01.842553 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-27 01:26:01.842560 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-27 01:26:01.842564 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-27 01:26:01.842568 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-27 01:26:01.842571 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-27 01:26:01.842575 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-27 01:26:01.842581 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-27 01:26:01.842585 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-27T01:24:40.000000 | 2026-03-27 01:26:01.842592 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-27 01:26:01.842596 | orchestrator | | accessIPv4 | | 2026-03-27 
01:26:01.842603 | orchestrator | | accessIPv6 | | 2026-03-27 01:26:01.842609 | orchestrator | | addresses | test=192.168.112.135, 192.168.200.71 | 2026-03-27 01:26:01.842616 | orchestrator | | config_drive | | 2026-03-27 01:26:01.842622 | orchestrator | | created | 2026-03-27T01:24:16Z | 2026-03-27 01:26:01.842631 | orchestrator | | description | None | 2026-03-27 01:26:01.842639 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-27 01:26:01.842646 | orchestrator | | hostId | 01699903420b7ff1c876537deb5bb4354183a2128cc04476e8508b02 | 2026-03-27 01:26:01.842652 | orchestrator | | host_status | None | 2026-03-27 01:26:01.842663 | orchestrator | | id | 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 | 2026-03-27 01:26:01.842692 | orchestrator | | image | N/A (booted from volume) | 2026-03-27 01:26:01.842704 | orchestrator | | key_name | test | 2026-03-27 01:26:01.842711 | orchestrator | | locked | False | 2026-03-27 01:26:01.842717 | orchestrator | | locked_reason | None | 2026-03-27 01:26:01.842724 | orchestrator | | name | test-1 | 2026-03-27 01:26:01.842731 | orchestrator | | pinned_availability_zone | None | 2026-03-27 01:26:01.842738 | orchestrator | | progress | 0 | 2026-03-27 01:26:01.842749 | orchestrator | | project_id | 416d5a0928f04516bec3165b9006a6fe | 2026-03-27 01:26:01.842753 | orchestrator | | properties | hostname='test-1' | 2026-03-27 01:26:01.842765 | orchestrator | | security_groups | name='ssh' | 2026-03-27 01:26:01.842769 | orchestrator | | | name='icmp' | 2026-03-27 01:26:01.842773 | orchestrator | | server_groups | None | 2026-03-27 01:26:01.842777 | orchestrator | | status | ACTIVE | 2026-03-27 
01:26:01.842781 | orchestrator | | tags | test |
2026-03-27 01:26:01.842784 | orchestrator | | trusted_image_certificates | None |
2026-03-27 01:26:01.842788 | orchestrator | | updated | 2026-03-27T01:25:04Z |
2026-03-27 01:26:01.842794 | orchestrator | | user_id | 5623d559f3164243ae0033e5aac02454 |
2026-03-27 01:26:01.842798 | orchestrator | | volumes_attached | delete_on_termination='True', id='d67f4f5a-b802-45a2-9fe7-81b5d501b278' |
2026-03-27 01:26:01.846391 | orchestrator | +-------------------------------------+--------------------------------------------------+
2026-03-27 01:26:02.105443 | orchestrator | + openstack --os-cloud test server show test-2
2026-03-27 01:26:04.968567 | orchestrator | +-------------------------------------+--------------------------------------------------+
2026-03-27 01:26:04.968628 | orchestrator | | Field | Value |
2026-03-27 01:26:04.968638 | orchestrator | +-------------------------------------+--------------------------------------------------+
2026-03-27 01:26:04.968645 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-27 01:26:04.968652 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-27 01:26:04.968659 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-27 01:26:04.968665 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-03-27 01:26:04.968677 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-27 01:26:04.968684 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-27 01:26:04.968713 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-27 01:26:04.968721 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-27 01:26:04.968728 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-27 01:26:04.968735 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-27 01:26:04.968742 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-27 01:26:04.968749 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-27 01:26:04.968755 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-27 01:26:04.968762 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-27 01:26:04.968786 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-27 01:26:04.968797 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-27T01:24:40.000000 |
2026-03-27 01:26:04.968808 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-27 01:26:04.968815 | orchestrator | | accessIPv4 | |
2026-03-27 01:26:04.968822 | orchestrator | | accessIPv6 | |
2026-03-27 01:26:04.968828 | orchestrator | | addresses | test=192.168.112.156, 192.168.200.192 |
2026-03-27 01:26:04.968835 | orchestrator | | config_drive | |
2026-03-27 01:26:04.968842 | orchestrator | | created | 2026-03-27T01:24:16Z |
2026-03-27 01:26:04.968849 | orchestrator | | description | None |
2026-03-27 01:26:04.968856 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-27 01:26:04.968868 | orchestrator | | hostId | 01699903420b7ff1c876537deb5bb4354183a2128cc04476e8508b02 |
2026-03-27 01:26:04.968875 | orchestrator | | host_status | None |
2026-03-27 01:26:04.968886 | orchestrator | | id | 1d8278fb-a580-4a9a-9140-3ac30647656d |
2026-03-27 01:26:04.968892 | orchestrator | | image | N/A (booted from volume) |
2026-03-27 01:26:04.968899 | orchestrator | | key_name | test |
2026-03-27 01:26:04.968905 | orchestrator | | locked | False |
2026-03-27 01:26:04.968911 | orchestrator | | locked_reason | None |
2026-03-27 01:26:04.968918 | orchestrator | | name | test-2 |
2026-03-27 01:26:04.968924 | orchestrator | | pinned_availability_zone | None |
2026-03-27 01:26:04.968930 | orchestrator | | progress | 0 |
2026-03-27 01:26:04.968939 | orchestrator | | project_id | 416d5a0928f04516bec3165b9006a6fe |
2026-03-27 01:26:04.968945 | orchestrator | | properties | hostname='test-2' |
2026-03-27 01:26:04.968955 | orchestrator | | security_groups | name='ssh' |
2026-03-27 01:26:04.968961 | orchestrator | | | name='icmp' |
2026-03-27 01:26:04.968967 | orchestrator | | server_groups | None |
2026-03-27 01:26:04.968973 | orchestrator | | status | ACTIVE |
2026-03-27 01:26:04.968979 | orchestrator | | tags | test |
2026-03-27 01:26:04.969022 | orchestrator | | trusted_image_certificates | None |
2026-03-27 01:26:04.969031 | orchestrator | | updated | 2026-03-27T01:25:05Z |
2026-03-27 01:26:04.969041 | orchestrator | | user_id | 5623d559f3164243ae0033e5aac02454 |
2026-03-27 01:26:04.969050 | orchestrator | | volumes_attached | delete_on_termination='True', id='b52e9c63-79fc-41f3-8447-57e1603c8150' |
2026-03-27 01:26:04.973350 | orchestrator | +-------------------------------------+--------------------------------------------------+
2026-03-27 01:26:05.232110 | orchestrator | + openstack --os-cloud test server show test-3
2026-03-27 01:26:07.924653 | orchestrator | +-------------------------------------+--------------------------------------------------+
2026-03-27 01:26:07.924708 | orchestrator | | Field | Value |
2026-03-27 01:26:07.924716 | orchestrator | +-------------------------------------+--------------------------------------------------+
2026-03-27 01:26:07.924723 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-27 01:26:07.924729 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-27 01:26:07.924737 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-27 01:26:07.924758 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-03-27 01:26:07.924766 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-27 01:26:07.924781 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-27 01:26:07.924799 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-27 01:26:07.924806 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-27 01:26:07.924813 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-27 01:26:07.924820 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-27 01:26:07.924827 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-27 01:26:07.924834 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-27 01:26:07.924845 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-27 01:26:07.924852 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-27 01:26:07.924861 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-27 01:26:07.924868 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-27T01:24:43.000000 |
2026-03-27 01:26:07.924879 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-27 01:26:07.924886 | orchestrator | | accessIPv4 | |
2026-03-27 01:26:07.924893 | orchestrator | | accessIPv6 | |
2026-03-27 01:26:07.924898 | orchestrator | | addresses | test=192.168.112.134, 192.168.200.61 |
2026-03-27 01:26:07.924902 | orchestrator | | config_drive | |
2026-03-27 01:26:07.924909 | orchestrator | | created | 2026-03-27T01:24:17Z |
2026-03-27 01:26:07.924913 | orchestrator | | description | None |
2026-03-27 01:26:07.924916 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-27 01:26:07.924923 | orchestrator | | hostId | 01699903420b7ff1c876537deb5bb4354183a2128cc04476e8508b02 |
2026-03-27 01:26:07.924926 | orchestrator | | host_status | None |
2026-03-27 01:26:07.924933 | orchestrator | | id | cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d |
2026-03-27 01:26:07.924937 | orchestrator | | image | N/A (booted from volume) |
2026-03-27 01:26:07.924941 | orchestrator | | key_name | test |
2026-03-27 01:26:07.924945 | orchestrator | | locked | False |
2026-03-27 01:26:07.924949 | orchestrator | | locked_reason | None |
2026-03-27 01:26:07.924956 | orchestrator | | name | test-3 |
2026-03-27 01:26:07.924960 | orchestrator | | pinned_availability_zone | None |
2026-03-27 01:26:07.924964 | orchestrator | | progress | 0 |
2026-03-27 01:26:07.924970 | orchestrator | | project_id | 416d5a0928f04516bec3165b9006a6fe |
2026-03-27 01:26:07.924974 | orchestrator | | properties | hostname='test-3' |
2026-03-27 01:26:07.924981 | orchestrator | | security_groups | name='ssh' |
2026-03-27 01:26:07.924985 | orchestrator | | | name='icmp' |
2026-03-27 01:26:07.924989 | orchestrator | | server_groups | None |
2026-03-27 01:26:07.924993 | orchestrator | | status | ACTIVE |
2026-03-27 01:26:07.925036 | orchestrator | | tags | test |
2026-03-27 01:26:07.925042 | orchestrator | | trusted_image_certificates | None |
2026-03-27 01:26:07.925046 | orchestrator | | updated | 2026-03-27T01:25:05Z |
2026-03-27 01:26:07.925050 | orchestrator | | user_id | 5623d559f3164243ae0033e5aac02454 |
2026-03-27 01:26:07.925056 | orchestrator | | volumes_attached | delete_on_termination='True', id='a7fd2243-3db9-48df-84bb-d7d1044e07cc' |
2026-03-27 01:26:07.927720 | orchestrator | +-------------------------------------+--------------------------------------------------+
2026-03-27 01:26:08.095355 | orchestrator | + openstack --os-cloud test server show test-4
2026-03-27 01:26:10.709375 | orchestrator | +-------------------------------------+--------------------------------------------------+
2026-03-27 01:26:10.709437 | orchestrator | | Field | Value |
2026-03-27 01:26:10.709448 | orchestrator | +-------------------------------------+--------------------------------------------------+
2026-03-27 01:26:10.709470 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-27 01:26:10.709477 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-27 01:26:10.709484 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-27 01:26:10.709491 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-03-27 01:26:10.709497 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-27 01:26:10.709505 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-27 01:26:10.709522 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-27 01:26:10.709530 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-27 01:26:10.709538 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-27 01:26:10.709551 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-27 01:26:10.709560 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-27 01:26:10.709568 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-27 01:26:10.709575 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-27 01:26:10.709582 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-27 01:26:10.709816 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-27 01:26:10.709836 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-27T01:24:42.000000 |
2026-03-27 01:26:10.709852 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-27 01:26:10.709860 | orchestrator | | accessIPv4 | |
2026-03-27 01:26:10.709874 | orchestrator | | accessIPv6 | |
2026-03-27 01:26:10.709882 | orchestrator | | addresses | test=192.168.112.148, 192.168.200.131 |
2026-03-27 01:26:10.709890 | orchestrator | | config_drive | |
2026-03-27 01:26:10.709897 | orchestrator | | created | 2026-03-27T01:24:18Z |
2026-03-27 01:26:10.709908 | orchestrator | | description | None |
2026-03-27 01:26:10.709915 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-27 01:26:10.709923 | orchestrator | | hostId | 190fca7711e51f728e5d5079314d05d76c2e035d1d4bcabacf5f2366 |
2026-03-27 01:26:10.709929 | orchestrator | | host_status | None |
2026-03-27 01:26:10.709937 | orchestrator | | id | 603639ea-0672-4b4a-9398-906fc860df0c |
2026-03-27 01:26:10.709941 | orchestrator | | image | N/A (booted from volume) |
2026-03-27 01:26:10.709949 | orchestrator | | key_name | test |
2026-03-27 01:26:10.709953 | orchestrator | | locked | False |
2026-03-27 01:26:10.709957 | orchestrator | | locked_reason | None |
2026-03-27 01:26:10.709961 | orchestrator | | name | test-4 |
2026-03-27 01:26:10.709967 | orchestrator | | pinned_availability_zone | None |
2026-03-27 01:26:10.709971 | orchestrator | | progress | 0 |
2026-03-27 01:26:10.709975 | orchestrator | | project_id | 416d5a0928f04516bec3165b9006a6fe |
2026-03-27 01:26:10.709979 | orchestrator | | properties | hostname='test-4' |
2026-03-27 01:26:10.709987 | orchestrator | | security_groups | name='ssh' |
2026-03-27 01:26:10.709994 | orchestrator | | | name='icmp' |
2026-03-27 01:26:10.709998 | orchestrator | | server_groups | None |
2026-03-27 01:26:10.710078 | orchestrator | | status | ACTIVE |
2026-03-27 01:26:10.710086 | orchestrator | | tags | test |
2026-03-27 01:26:10.710091 | orchestrator | | trusted_image_certificates | None |
2026-03-27 01:26:10.710097 | orchestrator | | updated | 2026-03-27T01:25:06Z |
2026-03-27 01:26:10.710101 | orchestrator | | user_id | 5623d559f3164243ae0033e5aac02454 |
2026-03-27 01:26:10.710105 | orchestrator | | volumes_attached | delete_on_termination='True', id='cd4cf0a2-7eb1-4ddc-a25c-dd87cc1dc55d' |
2026-03-27 01:26:10.713521 | orchestrator | +-------------------------------------+--------------------------------------------------+
2026-03-27 01:26:10.868345 | orchestrator | + server_ping
2026-03-27 01:26:10.870309 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-27 01:26:10.871332 | orchestrator | ++ tr -d '\r'
2026-03-27 01:26:13.201473 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:26:13.201545 | orchestrator | + ping -c3 192.168.112.148
2026-03-27 01:26:13.214648 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data.
2026-03-27 01:26:13.214741 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=5.79 ms
2026-03-27 01:26:14.212760 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=1.97 ms
2026-03-27 01:26:15.213748 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=1.93 ms
2026-03-27 01:26:15.213839 | orchestrator |
2026-03-27 01:26:15.213848 | orchestrator | --- 192.168.112.148 ping statistics ---
2026-03-27 01:26:15.213856 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:26:15.213863 | orchestrator | rtt min/avg/max/mdev = 1.931/3.228/5.789/1.810 ms
2026-03-27 01:26:15.214482 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:26:15.214547 | orchestrator | + ping -c3 192.168.112.135
2026-03-27 01:26:15.223948 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data.
2026-03-27 01:26:15.224132 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=5.82 ms
2026-03-27 01:26:16.221919 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=2.56 ms
2026-03-27 01:26:17.223228 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=2.01 ms
2026-03-27 01:26:17.223566 | orchestrator |
2026-03-27 01:26:17.223592 | orchestrator | --- 192.168.112.135 ping statistics ---
2026-03-27 01:26:17.223601 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:26:17.223608 | orchestrator | rtt min/avg/max/mdev = 2.012/3.462/5.820/1.681 ms
2026-03-27 01:26:17.224505 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:26:17.224542 | orchestrator | + ping -c3 192.168.112.156
2026-03-27 01:26:17.239073 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-03-27 01:26:17.239164 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=10.7 ms
2026-03-27 01:26:18.231637 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=1.41 ms
2026-03-27 01:26:19.233555 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.45 ms
2026-03-27 01:26:19.234101 | orchestrator |
2026-03-27 01:26:19.234137 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-03-27 01:26:19.234143 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-27 01:26:19.234148 | orchestrator | rtt min/avg/max/mdev = 1.410/4.510/10.673/4.357 ms
2026-03-27 01:26:19.234340 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:26:19.234361 | orchestrator | + ping -c3 192.168.112.134
2026-03-27 01:26:19.244017 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2026-03-27 01:26:19.244112 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=5.21 ms
2026-03-27 01:26:20.241820 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=1.59 ms
2026-03-27 01:26:21.242872 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.15 ms
2026-03-27 01:26:21.242932 | orchestrator |
2026-03-27 01:26:21.242939 | orchestrator | --- 192.168.112.134 ping statistics ---
2026-03-27 01:26:21.242944 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-27 01:26:21.242948 | orchestrator | rtt min/avg/max/mdev = 1.145/2.647/5.206/1.818 ms
2026-03-27 01:26:21.242955 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:26:21.242963 | orchestrator | + ping -c3 192.168.112.187
2026-03-27 01:26:21.249886 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data.
2026-03-27 01:26:21.249956 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=3.50 ms
2026-03-27 01:26:22.249427 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=1.27 ms
2026-03-27 01:26:23.251338 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.60 ms
2026-03-27 01:26:23.251417 | orchestrator |
2026-03-27 01:26:23.251425 | orchestrator | --- 192.168.112.187 ping statistics ---
2026-03-27 01:26:23.251431 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:26:23.251436 | orchestrator | rtt min/avg/max/mdev = 1.272/2.124/3.499/0.981 ms
2026-03-27 01:26:23.252608 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-27 01:26:23.252667 | orchestrator | + compute_list
2026-03-27 01:26:23.252678 | orchestrator | + osism manage compute list testbed-node-3
2026-03-27 01:26:24.843745 | orchestrator | 2026-03-27 01:26:24 | ERROR | Unable to get ansible vault password
2026-03-27 01:26:24.843820 | orchestrator | 2026-03-27 01:26:24 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:26:24.843828 | orchestrator | 2026-03-27 01:26:24 | ERROR | Dropping encrypted entries
2026-03-27 01:26:27.701621 | orchestrator | +------+--------+----------+
2026-03-27 01:26:27.701686 | orchestrator | | ID | Name | Status |
2026-03-27 01:26:27.701697 | orchestrator | |------+--------+----------|
2026-03-27 01:26:27.701704 | orchestrator | +------+--------+----------+
2026-03-27 01:26:27.902621 | orchestrator | + osism manage compute list testbed-node-4
2026-03-27 01:26:29.315745 | orchestrator | 2026-03-27 01:26:29 | ERROR | Unable to get ansible vault password
2026-03-27 01:26:29.315817 | orchestrator | 2026-03-27 01:26:29 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:26:29.315825 | orchestrator | 2026-03-27 01:26:29 | ERROR | Dropping encrypted entries
2026-03-27 01:26:31.015947 | orchestrator | +--------------------------------------+--------+----------+
2026-03-27 01:26:31.016040 | orchestrator | | ID | Name | Status |
2026-03-27 01:26:31.016197 | orchestrator | |--------------------------------------+--------+----------|
2026-03-27 01:26:31.016212 | orchestrator | | 603639ea-0672-4b4a-9398-906fc860df0c | test-4 | ACTIVE |
2026-03-27 01:26:31.016218 | orchestrator | +--------------------------------------+--------+----------+
2026-03-27 01:26:31.237752 | orchestrator | + osism manage compute list testbed-node-5
2026-03-27 01:26:32.612080 | orchestrator | 2026-03-27 01:26:32 | ERROR | Unable to get ansible vault password
2026-03-27 01:26:32.612150 | orchestrator | 2026-03-27 01:26:32 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:26:32.612158 | orchestrator | 2026-03-27 01:26:32 | ERROR | Dropping encrypted entries
2026-03-27 01:26:34.152592 | orchestrator | +--------------------------------------+--------+----------+
2026-03-27 01:26:34.152658 | orchestrator | | ID | Name | Status |
2026-03-27 01:26:34.152664 | orchestrator | |--------------------------------------+--------+----------|
2026-03-27 01:26:34.152668 | orchestrator | | cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d | test-3 | ACTIVE |
2026-03-27 01:26:34.152672 | orchestrator | | 1d8278fb-a580-4a9a-9140-3ac30647656d | test-2 | ACTIVE |
2026-03-27 01:26:34.152676 | orchestrator | | 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 | test-1 | ACTIVE |
2026-03-27 01:26:34.152681 | orchestrator | | f19fbfb6-0649-428b-abab-bafb64f00dbb | test | ACTIVE |
2026-03-27 01:26:34.152685 | orchestrator | +--------------------------------------+--------+----------+
2026-03-27 01:26:34.369841 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2026-03-27 01:26:35.782981 | orchestrator | 2026-03-27 01:26:35 | ERROR | Unable to get ansible vault password
2026-03-27 01:26:35.783035 | orchestrator | 2026-03-27 01:26:35 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:26:35.783042 | orchestrator | 2026-03-27 01:26:35 | ERROR | Dropping encrypted entries
2026-03-27 01:26:37.025174 | orchestrator | 2026-03-27 01:26:37 | INFO | Live migrating server 603639ea-0672-4b4a-9398-906fc860df0c
2026-03-27 01:26:50.731896 | orchestrator | 2026-03-27 01:26:50 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:26:53.145035 | orchestrator | 2026-03-27 01:26:53 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:26:55.443473 | orchestrator | 2026-03-27 01:26:55 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:26:57.739950 | orchestrator | 2026-03-27 01:26:57 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:27:00.238063 | orchestrator | 2026-03-27 01:27:00 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:27:02.675845 | orchestrator | 2026-03-27 01:27:02 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:27:04.915239 | orchestrator | 2026-03-27 01:27:04 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:27:07.188918 | orchestrator | 2026-03-27 01:27:07 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:27:09.501794 | orchestrator | 2026-03-27 01:27:09 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:27:11.865080 | orchestrator | 2026-03-27 01:27:11 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:27:14.142689 | orchestrator | 2026-03-27 01:27:14 | INFO | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) completed with status ACTIVE
2026-03-27 01:27:14.483076 | orchestrator | + compute_list
2026-03-27 01:27:14.483127 | orchestrator | + osism manage compute list testbed-node-3
2026-03-27 01:27:16.168249 | orchestrator | 2026-03-27 01:27:16 | ERROR | Unable to get ansible vault password
2026-03-27 01:27:16.168306 | orchestrator | 2026-03-27 01:27:16 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:27:16.168312 | orchestrator | 2026-03-27 01:27:16 | ERROR | Dropping encrypted entries
2026-03-27 01:27:17.361955 | orchestrator | +--------------------------------------+--------+----------+
2026-03-27 01:27:17.362044 | orchestrator | | ID | Name | Status |
2026-03-27 01:27:17.362053 | orchestrator | |--------------------------------------+--------+----------|
2026-03-27 01:27:17.362058 | orchestrator | | 603639ea-0672-4b4a-9398-906fc860df0c | test-4 | ACTIVE |
2026-03-27 01:27:17.362063 | orchestrator | +--------------------------------------+--------+----------+
2026-03-27 01:27:17.672359 | orchestrator | + osism manage compute list testbed-node-4
2026-03-27 01:27:19.214677 | orchestrator | 2026-03-27 01:27:19 | ERROR | Unable to get ansible vault password
2026-03-27 01:27:19.214744 | orchestrator | 2026-03-27 01:27:19 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:27:19.214754 | orchestrator | 2026-03-27 01:27:19 | ERROR | Dropping encrypted entries
2026-03-27 01:27:20.132364 | orchestrator | +------+--------+----------+
2026-03-27 01:27:20.132429 | orchestrator | | ID | Name | Status |
2026-03-27 01:27:20.132435 | orchestrator | |------+--------+----------|
2026-03-27 01:27:20.132438 | orchestrator | +------+--------+----------+
2026-03-27 01:27:20.430865 | orchestrator | + osism manage compute list testbed-node-5
2026-03-27 01:27:22.000986 | orchestrator | 2026-03-27 01:27:22 | ERROR | Unable to get ansible vault password
2026-03-27 01:27:22.001075 | orchestrator | 2026-03-27 01:27:22 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:27:22.001089 | orchestrator | 2026-03-27 01:27:22 | ERROR | Dropping encrypted entries
2026-03-27 01:27:23.607876 | orchestrator | +--------------------------------------+--------+----------+
2026-03-27 01:27:23.607952 | orchestrator | | ID | Name | Status |
2026-03-27 01:27:23.607958 | orchestrator | |--------------------------------------+--------+----------|
2026-03-27 01:27:23.607962 | orchestrator | | cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d | test-3 | ACTIVE |
2026-03-27 01:27:23.607966 | orchestrator | | 1d8278fb-a580-4a9a-9140-3ac30647656d | test-2 | ACTIVE |
2026-03-27 01:27:23.607971 | orchestrator | | 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 | test-1 | ACTIVE |
2026-03-27 01:27:23.607975 | orchestrator | | f19fbfb6-0649-428b-abab-bafb64f00dbb | test | ACTIVE |
2026-03-27 01:27:23.607979 | orchestrator | +--------------------------------------+--------+----------+
2026-03-27 01:27:23.899145 | orchestrator | + server_ping
2026-03-27 01:27:23.899950 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-27 01:27:23.899998 | orchestrator | ++ tr -d '\r'
2026-03-27 01:27:26.755796 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:27:26.755901 | orchestrator | + ping -c3 192.168.112.148
2026-03-27 01:27:26.766528 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data.
2026-03-27 01:27:26.766604 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=6.91 ms
2026-03-27 01:27:27.763023 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=1.96 ms
2026-03-27 01:27:28.764211 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=1.78 ms
2026-03-27 01:27:28.764304 | orchestrator |
2026-03-27 01:27:28.764314 | orchestrator | --- 192.168.112.148 ping statistics ---
2026-03-27 01:27:28.764323 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:27:28.764329 | orchestrator | rtt min/avg/max/mdev = 1.782/3.547/6.906/2.375 ms
2026-03-27 01:27:28.764337 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:27:28.764344 | orchestrator | + ping -c3 192.168.112.135
2026-03-27 01:27:28.773504 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data.
2026-03-27 01:27:28.773578 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=6.05 ms
2026-03-27 01:27:29.769991 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=1.25 ms
2026-03-27 01:27:30.772029 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=1.19 ms
2026-03-27 01:27:30.772075 | orchestrator |
2026-03-27 01:27:30.772081 | orchestrator | --- 192.168.112.135 ping statistics ---
2026-03-27 01:27:30.772087 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:27:30.772093 | orchestrator | rtt min/avg/max/mdev = 1.189/2.830/6.050/2.277 ms
2026-03-27 01:27:30.772102 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:27:30.772125 | orchestrator | + ping -c3 192.168.112.156
2026-03-27 01:27:30.781760 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-03-27 01:27:30.781805 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=4.72 ms
2026-03-27 01:27:31.780498 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=1.54 ms
2026-03-27 01:27:32.783013 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.51 ms
2026-03-27 01:27:32.783070 | orchestrator |
2026-03-27 01:27:32.783079 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-03-27 01:27:32.783086 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-27 01:27:32.783092 | orchestrator | rtt min/avg/max/mdev = 1.507/2.589/4.716/1.504 ms
2026-03-27 01:27:32.783099 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:27:32.783105 | orchestrator | + ping -c3 192.168.112.134
2026-03-27 01:27:32.791594 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2026-03-27 01:27:32.791653 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=4.15 ms
2026-03-27 01:27:33.790953 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=1.99 ms
2026-03-27 01:27:34.792962 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.94 ms
2026-03-27 01:27:34.793041 | orchestrator |
2026-03-27 01:27:34.793047 | orchestrator | --- 192.168.112.134 ping statistics ---
2026-03-27 01:27:34.793053 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:27:34.793057 | orchestrator | rtt min/avg/max/mdev = 1.943/2.695/4.148/1.027 ms
2026-03-27 01:27:34.793062 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:27:34.793067 | orchestrator | + ping -c3 192.168.112.187
2026-03-27 01:27:34.803037 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data.
2026-03-27 01:27:34.803126 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=5.56 ms
2026-03-27 01:27:35.801841 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.37 ms
2026-03-27 01:27:36.803159 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.69 ms
2026-03-27 01:27:36.803283 | orchestrator |
2026-03-27 01:27:36.803291 | orchestrator | --- 192.168.112.187 ping statistics ---
2026-03-27 01:27:36.803298 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:27:36.803303 | orchestrator | rtt min/avg/max/mdev = 1.692/3.208/5.561/1.686 ms
2026-03-27 01:27:36.803476 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-03-27 01:27:38.446908 | orchestrator | 2026-03-27 01:27:38 | ERROR | Unable to get ansible vault password
2026-03-27 01:27:38.447056 | orchestrator | 2026-03-27 01:27:38 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:27:38.447069 | orchestrator | 2026-03-27 01:27:38 | ERROR | Dropping encrypted entries
2026-03-27 01:27:40.164608 | orchestrator | 2026-03-27 01:27:40 | INFO | Live migrating server cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d
2026-03-27 01:27:49.890369 | orchestrator | 2026-03-27 01:27:49 | INFO | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:27:52.166640 | orchestrator | 2026-03-27 01:27:52 | INFO | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:27:54.451648 | orchestrator | 2026-03-27 01:27:54 | INFO | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:27:56.721530 | orchestrator | 2026-03-27 01:27:56 | INFO | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:27:59.066910 | orchestrator | 2026-03-27 01:27:59 | INFO | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:28:01.366304 | orchestrator | 2026-03-27 01:28:01 | INFO | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:28:03.689195 | orchestrator | 2026-03-27 01:28:03 | INFO | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:28:05.885779 | orchestrator | 2026-03-27 01:28:05 | INFO | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:28:08.139881 | orchestrator | 2026-03-27 01:28:08 | INFO | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) completed with status ACTIVE
2026-03-27 01:28:08.139933 | orchestrator | 2026-03-27 01:28:08 | INFO | Live migrating server 1d8278fb-a580-4a9a-9140-3ac30647656d
2026-03-27 01:28:18.823726 | orchestrator | 2026-03-27 01:28:18 | INFO | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:28:21.085897 | orchestrator | 2026-03-27 01:28:21 | INFO | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:28:23.350078 | orchestrator | 2026-03-27 01:28:23 | INFO | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:28:25.532640 | orchestrator | 2026-03-27 01:28:25 | INFO | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:28:27.767914 | orchestrator | 2026-03-27 01:28:27 | INFO | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:28:29.972960 | orchestrator | 2026-03-27 01:28:29 | INFO | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:28:32.252657 | orchestrator | 2026-03-27 01:28:32 | INFO | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:28:34.539445 | orchestrator | 2026-03-27 01:28:34 | INFO | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:28:36.899415 | orchestrator | 2026-03-27 01:28:36 | INFO | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) completed with status ACTIVE
2026-03-27 01:28:36.899494 | orchestrator | 2026-03-27 01:28:36 | INFO | Live migrating server 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4
2026-03-27 01:28:48.478193 | orchestrator | 2026-03-27 01:28:48 | INFO | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:28:50.775864 | orchestrator | 2026-03-27 01:28:50 | INFO | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:28:53.106578 | orchestrator | 2026-03-27 01:28:53 | INFO | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:28:55.411934 | orchestrator | 2026-03-27 01:28:55 | INFO | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:28:57.680636 | orchestrator | 2026-03-27 01:28:57 | INFO | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:28:59.981629 | orchestrator | 2026-03-27 01:28:59 | INFO | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:29:02.290397 | orchestrator | 2026-03-27 01:29:02 | INFO | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:29:04.558530 | orchestrator | 2026-03-27 01:29:04 | INFO | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:29:07.359828 | orchestrator | 2026-03-27 01:29:07 | INFO | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) completed with status ACTIVE
2026-03-27 01:29:07.359910 | orchestrator | 2026-03-27 01:29:07 | INFO | Live migrating server 
f19fbfb6-0649-428b-abab-bafb64f00dbb 2026-03-27 01:29:19.155338 | orchestrator | 2026-03-27 01:29:19 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:29:21.562161 | orchestrator | 2026-03-27 01:29:21 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:29:24.014780 | orchestrator | 2026-03-27 01:29:24 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:29:26.392322 | orchestrator | 2026-03-27 01:29:26 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:29:28.808274 | orchestrator | 2026-03-27 01:29:28 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:29:31.221012 | orchestrator | 2026-03-27 01:29:31 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:29:33.547890 | orchestrator | 2026-03-27 01:29:33 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:29:35.892131 | orchestrator | 2026-03-27 01:29:35 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:29:38.172051 | orchestrator | 2026-03-27 01:29:38 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:29:40.466065 | orchestrator | 2026-03-27 01:29:40 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) completed with status ACTIVE 2026-03-27 01:29:40.797370 | orchestrator | + compute_list 2026-03-27 01:29:40.797425 | orchestrator | + osism manage compute list testbed-node-3 2026-03-27 01:29:42.563504 | orchestrator | 2026-03-27 01:29:42 | ERROR  | Unable to get ansible vault password 2026-03-27 01:29:42.563602 | orchestrator | 2026-03-27 01:29:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-03-27 01:29:42.563615 | orchestrator | 2026-03-27 01:29:42 | ERROR  | Dropping encrypted entries 2026-03-27 01:29:44.096936 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-27 01:29:44.098245 | orchestrator | | ID | Name | Status | 2026-03-27 01:29:44.098304 | orchestrator | |--------------------------------------+--------+----------| 2026-03-27 01:29:44.098313 | orchestrator | | 603639ea-0672-4b4a-9398-906fc860df0c | test-4 | ACTIVE | 2026-03-27 01:29:44.098319 | orchestrator | | cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d | test-3 | ACTIVE | 2026-03-27 01:29:44.098326 | orchestrator | | 1d8278fb-a580-4a9a-9140-3ac30647656d | test-2 | ACTIVE | 2026-03-27 01:29:44.098332 | orchestrator | | 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 | test-1 | ACTIVE | 2026-03-27 01:29:44.098338 | orchestrator | | f19fbfb6-0649-428b-abab-bafb64f00dbb | test | ACTIVE | 2026-03-27 01:29:44.098345 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-27 01:29:44.402922 | orchestrator | + osism manage compute list testbed-node-4 2026-03-27 01:29:45.968751 | orchestrator | 2026-03-27 01:29:45 | ERROR  | Unable to get ansible vault password 2026-03-27 01:29:45.968838 | orchestrator | 2026-03-27 01:29:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-27 01:29:45.968853 | orchestrator | 2026-03-27 01:29:45 | ERROR  | Dropping encrypted entries 2026-03-27 01:29:47.051950 | orchestrator | +------+--------+----------+ 2026-03-27 01:29:47.052034 | orchestrator | | ID | Name | Status | 2026-03-27 01:29:47.052044 | orchestrator | |------+--------+----------| 2026-03-27 01:29:47.052052 | orchestrator | +------+--------+----------+ 2026-03-27 01:29:47.333288 | orchestrator | + osism manage compute list testbed-node-5 2026-03-27 01:29:48.876698 | orchestrator | 2026-03-27 01:29:48 | ERROR  | Unable to get ansible vault 
password 2026-03-27 01:29:48.876790 | orchestrator | 2026-03-27 01:29:48 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-27 01:29:48.876804 | orchestrator | 2026-03-27 01:29:48 | ERROR  | Dropping encrypted entries 2026-03-27 01:29:49.964203 | orchestrator | +------+--------+----------+ 2026-03-27 01:29:50.131778 | orchestrator | | ID | Name | Status | 2026-03-27 01:29:50.131855 | orchestrator | |------+--------+----------| 2026-03-27 01:29:50.131861 | orchestrator | +------+--------+----------+ 2026-03-27 01:29:50.279015 | orchestrator | + server_ping 2026-03-27 01:29:50.280445 | orchestrator | ++ tr -d '\r' 2026-03-27 01:29:50.280493 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-27 01:29:52.950707 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-27 01:29:52.950783 | orchestrator | + ping -c3 192.168.112.148 2026-03-27 01:29:52.961028 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data. 
2026-03-27 01:29:52.961123 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=8.69 ms
2026-03-27 01:29:53.956834 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=2.18 ms
2026-03-27 01:29:54.958214 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=1.86 ms
2026-03-27 01:29:54.958312 | orchestrator |
2026-03-27 01:29:54.958322 | orchestrator | --- 192.168.112.148 ping statistics ---
2026-03-27 01:29:54.958329 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:29:54.958335 | orchestrator | rtt min/avg/max/mdev = 1.860/4.243/8.693/3.149 ms
2026-03-27 01:29:54.958923 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:29:54.958983 | orchestrator | + ping -c3 192.168.112.135
2026-03-27 01:29:54.965928 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data.
2026-03-27 01:29:54.966000 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=5.06 ms
2026-03-27 01:29:55.965554 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=2.29 ms
2026-03-27 01:29:56.966852 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=1.36 ms
2026-03-27 01:29:56.966938 | orchestrator |
2026-03-27 01:29:56.966947 | orchestrator | --- 192.168.112.135 ping statistics ---
2026-03-27 01:29:56.966953 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-27 01:29:56.966957 | orchestrator | rtt min/avg/max/mdev = 1.363/2.902/5.057/1.569 ms
2026-03-27 01:29:56.966962 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:29:56.966967 | orchestrator | + ping -c3 192.168.112.156
2026-03-27 01:29:56.977511 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-03-27 01:29:56.977590 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=5.76 ms
2026-03-27 01:29:57.976124 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.16 ms
2026-03-27 01:29:58.977682 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.57 ms
2026-03-27 01:29:58.977754 | orchestrator |
2026-03-27 01:29:58.977761 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-03-27 01:29:58.977767 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-27 01:29:58.977771 | orchestrator | rtt min/avg/max/mdev = 1.568/3.162/5.757/1.850 ms
2026-03-27 01:29:58.977776 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:29:58.977781 | orchestrator | + ping -c3 192.168.112.134
2026-03-27 01:29:58.986467 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2026-03-27 01:29:58.986538 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=4.26 ms
2026-03-27 01:29:59.985878 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=1.90 ms
2026-03-27 01:30:00.987007 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.34 ms
2026-03-27 01:30:00.987084 | orchestrator |
2026-03-27 01:30:00.987093 | orchestrator | --- 192.168.112.134 ping statistics ---
2026-03-27 01:30:00.987100 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:30:00.987106 | orchestrator | rtt min/avg/max/mdev = 1.340/2.500/4.262/1.266 ms
2026-03-27 01:30:00.987112 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:30:00.987118 | orchestrator | + ping -c3 192.168.112.187
2026-03-27 01:30:00.996598 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data.
2026-03-27 01:30:00.996672 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=5.18 ms
2026-03-27 01:30:01.995120 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.06 ms
2026-03-27 01:30:02.996260 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.75 ms
2026-03-27 01:30:02.996337 | orchestrator |
2026-03-27 01:30:02.996345 | orchestrator | --- 192.168.112.187 ping statistics ---
2026-03-27 01:30:02.996351 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:30:02.996355 | orchestrator | rtt min/avg/max/mdev = 1.754/2.995/5.176/1.546 ms
2026-03-27 01:30:02.997485 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-03-27 01:30:04.564713 | orchestrator | 2026-03-27 01:30:04 | ERROR  | Unable to get ansible vault password
2026-03-27 01:30:04.564805 | orchestrator | 2026-03-27 01:30:04 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:30:04.564813 | orchestrator | 2026-03-27 01:30:04 | ERROR  | Dropping encrypted entries
2026-03-27 01:30:06.071656 | orchestrator | 2026-03-27 01:30:06 | INFO  | Live migrating server 603639ea-0672-4b4a-9398-906fc860df0c
2026-03-27 01:30:18.006110 | orchestrator | 2026-03-27 01:30:18 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:30:20.396180 | orchestrator | 2026-03-27 01:30:20 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:30:22.715038 | orchestrator | 2026-03-27 01:30:22 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:30:25.001680 | orchestrator | 2026-03-27 01:30:25 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:30:27.259558 | orchestrator | 2026-03-27 01:30:27 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:30:29.514338 | orchestrator | 2026-03-27 01:30:29 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:30:31.958602 | orchestrator | 2026-03-27 01:30:31 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:30:34.220505 | orchestrator | 2026-03-27 01:30:34 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:30:36.497919 | orchestrator | 2026-03-27 01:30:36 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) completed with status ACTIVE
2026-03-27 01:30:36.498063 | orchestrator | 2026-03-27 01:30:36 | INFO  | Live migrating server cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d
2026-03-27 01:30:45.752544 | orchestrator | 2026-03-27 01:30:45 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:30:48.119698 | orchestrator | 2026-03-27 01:30:48 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:30:50.498548 | orchestrator | 2026-03-27 01:30:50 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:30:52.842207 | orchestrator | 2026-03-27 01:30:52 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:30:55.128001 | orchestrator | 2026-03-27 01:30:55 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:30:57.325608 | orchestrator | 2026-03-27 01:30:57 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:30:59.649901 | orchestrator | 2026-03-27 01:30:59 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:31:01.927393 | orchestrator | 2026-03-27 01:31:01 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:31:04.240106 | orchestrator | 2026-03-27 01:31:04 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) completed with status ACTIVE
2026-03-27 01:31:04.240175 | orchestrator | 2026-03-27 01:31:04 | INFO  | Live migrating server 1d8278fb-a580-4a9a-9140-3ac30647656d
2026-03-27 01:31:14.343717 | orchestrator | 2026-03-27 01:31:14 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:31:16.655445 | orchestrator | 2026-03-27 01:31:16 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:31:18.974262 | orchestrator | 2026-03-27 01:31:18 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:31:21.389033 | orchestrator | 2026-03-27 01:31:21 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:31:23.644852 | orchestrator | 2026-03-27 01:31:23 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:31:25.935337 | orchestrator | 2026-03-27 01:31:25 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:31:28.104998 | orchestrator | 2026-03-27 01:31:28 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:31:30.330923 | orchestrator | 2026-03-27 01:31:30 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:31:32.578082 | orchestrator | 2026-03-27 01:31:32 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) completed with status ACTIVE
2026-03-27 01:31:32.578191 | orchestrator | 2026-03-27 01:31:32 | INFO  | Live migrating server 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4
2026-03-27 01:31:42.302708 | orchestrator | 2026-03-27 01:31:42 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:31:44.689256 | orchestrator | 2026-03-27 01:31:44 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:31:47.114603 | orchestrator | 2026-03-27 01:31:47 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:31:49.448749 | orchestrator | 2026-03-27 01:31:49 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:31:51.743164 | orchestrator | 2026-03-27 01:31:51 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:31:54.107307 | orchestrator | 2026-03-27 01:31:54 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:31:56.494454 | orchestrator | 2026-03-27 01:31:56 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:31:58.805044 | orchestrator | 2026-03-27 01:31:58 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress
2026-03-27 01:32:01.143725 | orchestrator | 2026-03-27 01:32:01 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) completed with status ACTIVE
2026-03-27 01:32:01.143857 | orchestrator | 2026-03-27 01:32:01 | INFO  | Live migrating server f19fbfb6-0649-428b-abab-bafb64f00dbb
2026-03-27 01:32:11.508072 | orchestrator | 2026-03-27 01:32:11 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress
2026-03-27 01:32:13.948268 | orchestrator | 2026-03-27 01:32:13 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress
2026-03-27 01:32:16.334422 | orchestrator | 2026-03-27 01:32:16 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress
2026-03-27 01:32:18.742737 | orchestrator | 2026-03-27 01:32:18 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress
2026-03-27 01:32:21.120937 | orchestrator | 2026-03-27 01:32:21 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress
2026-03-27 01:32:23.399051 | orchestrator | 2026-03-27 01:32:23 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress
2026-03-27 01:32:25.718694 | orchestrator | 2026-03-27 01:32:25 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress
2026-03-27 01:32:28.013377 | orchestrator | 2026-03-27 01:32:28 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress
2026-03-27 01:32:30.298428 | orchestrator | 2026-03-27 01:32:30 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress
2026-03-27 01:32:32.614321 | orchestrator | 2026-03-27 01:32:32 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress
2026-03-27 01:32:34.995283 | orchestrator | 2026-03-27 01:32:34 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) completed with status ACTIVE
2026-03-27 01:32:35.334092 | orchestrator | + compute_list
2026-03-27 01:32:35.557046 | orchestrator | + osism manage compute list testbed-node-3
2026-03-27 01:32:37.024821 | orchestrator | 2026-03-27 01:32:37 | ERROR  | Unable to get ansible vault password
2026-03-27 01:32:37.024989 | orchestrator | 2026-03-27 01:32:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:32:37.025020 | orchestrator | 2026-03-27 01:32:37 | ERROR  | Dropping encrypted entries
2026-03-27 01:32:38.381655 | orchestrator | +------+--------+----------+
2026-03-27 01:32:38.381730 | orchestrator | | ID | Name | Status |
2026-03-27 01:32:38.381739 | orchestrator | |------+--------+----------|
2026-03-27 01:32:38.381745 | orchestrator | +------+--------+----------+
2026-03-27 01:32:38.756166 | orchestrator | + osism manage compute list testbed-node-4
2026-03-27 01:32:40.397391 | orchestrator | 2026-03-27 01:32:40 | ERROR  | Unable to get ansible vault password
2026-03-27 01:32:40.826276 | orchestrator | 2026-03-27 01:32:40 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:32:40.826356 | orchestrator | 2026-03-27 01:32:40 | ERROR  | Dropping encrypted entries
2026-03-27 01:32:42.165204 | orchestrator | +--------------------------------------+--------+----------+
2026-03-27 01:32:42.165293 | orchestrator | | ID | Name | Status |
2026-03-27 01:32:42.165319 | orchestrator | |--------------------------------------+--------+----------|
2026-03-27 01:32:42.165335 | orchestrator | | 603639ea-0672-4b4a-9398-906fc860df0c | test-4 | ACTIVE |
2026-03-27 01:32:42.165342 | orchestrator | | cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d | test-3 | ACTIVE |
2026-03-27 01:32:42.165349 | orchestrator | | 1d8278fb-a580-4a9a-9140-3ac30647656d | test-2 | ACTIVE |
2026-03-27 01:32:42.165355 | orchestrator | | 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 | test-1 | ACTIVE |
2026-03-27 01:32:42.165363 | orchestrator | | f19fbfb6-0649-428b-abab-bafb64f00dbb | test | ACTIVE |
2026-03-27 01:32:42.165369 | orchestrator | +--------------------------------------+--------+----------+
2026-03-27 01:32:42.498785 | orchestrator | + osism manage compute list testbed-node-5
2026-03-27 01:32:44.131637 | orchestrator | 2026-03-27 01:32:44 | ERROR  | Unable to get ansible vault password
2026-03-27 01:32:44.131707 | orchestrator | 2026-03-27 01:32:44 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:32:44.131716 | orchestrator | 2026-03-27 01:32:44 | ERROR  | Dropping encrypted entries
2026-03-27 01:32:45.242378 | orchestrator | +------+--------+----------+
2026-03-27 01:32:45.242463 | orchestrator | | ID | Name | Status |
2026-03-27 01:32:45.242471 | orchestrator | |------+--------+----------|
2026-03-27 01:32:45.242478 | orchestrator | +------+--------+----------+
2026-03-27 01:32:45.614135 | orchestrator | + server_ping
2026-03-27 01:32:45.645922 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-27 01:32:45.645996 | orchestrator | ++ tr -d '\r'
2026-03-27 01:32:48.682932 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:32:48.683039 | orchestrator | + ping -c3 192.168.112.148
2026-03-27 01:32:48.691869 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data.
2026-03-27 01:32:48.691965 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=5.64 ms
2026-03-27 01:32:49.690271 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=2.82 ms
2026-03-27 01:32:50.691472 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=1.64 ms
2026-03-27 01:32:50.691637 | orchestrator |
2026-03-27 01:32:50.691652 | orchestrator | --- 192.168.112.148 ping statistics ---
2026-03-27 01:32:50.691662 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:32:50.691671 | orchestrator | rtt min/avg/max/mdev = 1.639/3.365/5.639/1.678 ms
2026-03-27 01:32:50.692038 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:32:50.692053 | orchestrator | + ping -c3 192.168.112.135
2026-03-27 01:32:50.705392 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data.
2026-03-27 01:32:50.705516 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=8.67 ms
2026-03-27 01:32:51.701526 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=2.90 ms
2026-03-27 01:32:52.701889 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=1.83 ms
2026-03-27 01:32:52.702005 | orchestrator |
2026-03-27 01:32:52.702106 | orchestrator | --- 192.168.112.135 ping statistics ---
2026-03-27 01:32:52.702130 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:32:52.702225 | orchestrator | rtt min/avg/max/mdev = 1.825/4.466/8.672/3.006 ms
2026-03-27 01:32:52.702533 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:32:52.702607 | orchestrator | + ping -c3 192.168.112.156
2026-03-27 01:32:52.717064 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-03-27 01:32:52.717169 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=9.85 ms
2026-03-27 01:32:53.710293 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=1.98 ms
2026-03-27 01:32:54.711621 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.63 ms
2026-03-27 01:32:54.711685 | orchestrator |
2026-03-27 01:32:54.711692 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-03-27 01:32:54.711698 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-27 01:32:54.711702 | orchestrator | rtt min/avg/max/mdev = 1.627/4.483/9.848/3.795 ms
2026-03-27 01:32:54.711708 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:32:54.711712 | orchestrator | + ping -c3 192.168.112.134
2026-03-27 01:32:54.723315 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2026-03-27 01:32:54.723435 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=7.51 ms
2026-03-27 01:32:55.720289 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=2.45 ms
2026-03-27 01:32:56.722210 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.90 ms
2026-03-27 01:32:56.722456 | orchestrator |
2026-03-27 01:32:56.722477 | orchestrator | --- 192.168.112.134 ping statistics ---
2026-03-27 01:32:56.722494 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-27 01:32:56.722511 | orchestrator | rtt min/avg/max/mdev = 1.902/3.955/7.514/2.526 ms
2026-03-27 01:32:56.722526 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:32:56.722542 | orchestrator | + ping -c3 192.168.112.187
2026-03-27 01:32:56.735410 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data.
2026-03-27 01:32:56.735500 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=8.10 ms
2026-03-27 01:32:57.731472 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.76 ms
2026-03-27 01:32:58.733399 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=2.05 ms
2026-03-27 01:32:58.733465 | orchestrator |
2026-03-27 01:32:58.733472 | orchestrator | --- 192.168.112.187 ping statistics ---
2026-03-27 01:32:58.733499 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-27 01:32:58.733504 | orchestrator | rtt min/avg/max/mdev = 2.051/4.305/8.100/2.699 ms
2026-03-27 01:32:58.733509 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-03-27 01:33:00.445188 | orchestrator | 2026-03-27 01:33:00 | ERROR  | Unable to get ansible vault password
2026-03-27 01:33:00.445266 | orchestrator | 2026-03-27 01:33:00 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-27 01:33:00.445274 | orchestrator | 2026-03-27 01:33:00 | ERROR  | Dropping encrypted entries
2026-03-27 01:33:02.066980 | orchestrator | 2026-03-27 01:33:02 | INFO  | Live migrating server 603639ea-0672-4b4a-9398-906fc860df0c
2026-03-27 01:33:13.993728 | orchestrator | 2026-03-27 01:33:13 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:33:16.329015 | orchestrator | 2026-03-27 01:33:16 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:33:18.686186 | orchestrator | 2026-03-27 01:33:18 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:33:21.003513 | orchestrator | 2026-03-27 01:33:21 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:33:23.364499 | orchestrator | 2026-03-27 01:33:23 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:33:25.648821 | orchestrator | 2026-03-27 01:33:25 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:33:27.984020 | orchestrator | 2026-03-27 01:33:27 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:33:30.313451 | orchestrator | 2026-03-27 01:33:30 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:33:32.623697 | orchestrator | 2026-03-27 01:33:32 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) is still in progress
2026-03-27 01:33:34.990672 | orchestrator | 2026-03-27 01:33:34 | INFO  | Live migration of 603639ea-0672-4b4a-9398-906fc860df0c (test-4) completed with status ACTIVE
2026-03-27 01:33:34.990767 | orchestrator | 2026-03-27 01:33:34 | INFO  | Live migrating server cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d
2026-03-27 01:33:45.231587 | orchestrator | 2026-03-27 01:33:45 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:33:47.557517 | orchestrator | 2026-03-27 01:33:47 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:33:49.906189 | orchestrator | 2026-03-27 01:33:49 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:33:52.157133 | orchestrator | 2026-03-27 01:33:52 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:33:54.400699 | orchestrator | 2026-03-27 01:33:54 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:33:56.724027 | orchestrator | 2026-03-27 01:33:56 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:33:59.072216 | orchestrator | 2026-03-27 01:33:59 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:34:01.377195 | orchestrator | 2026-03-27 01:34:01 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) is still in progress
2026-03-27 01:34:03.765699 | orchestrator | 2026-03-27 01:34:03 | INFO  | Live migration of cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d (test-3) completed with status ACTIVE
2026-03-27 01:34:03.765774 | orchestrator | 2026-03-27 01:34:03 | INFO  | Live migrating server 1d8278fb-a580-4a9a-9140-3ac30647656d
2026-03-27 01:34:13.919914 | orchestrator | 2026-03-27 01:34:13 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:34:16.258188 | orchestrator | 2026-03-27 01:34:16 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress
2026-03-27 01:34:18.588017 | orchestrator | 2026-03-27 01:34:18 | INFO  
| Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress 2026-03-27 01:34:21.085480 | orchestrator | 2026-03-27 01:34:21 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress 2026-03-27 01:34:23.507641 | orchestrator | 2026-03-27 01:34:23 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress 2026-03-27 01:34:26.004820 | orchestrator | 2026-03-27 01:34:26 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress 2026-03-27 01:34:28.302253 | orchestrator | 2026-03-27 01:34:28 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress 2026-03-27 01:34:30.535128 | orchestrator | 2026-03-27 01:34:30 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) is still in progress 2026-03-27 01:34:33.026584 | orchestrator | 2026-03-27 01:34:33 | INFO  | Live migration of 1d8278fb-a580-4a9a-9140-3ac30647656d (test-2) completed with status ACTIVE 2026-03-27 01:34:33.026734 | orchestrator | 2026-03-27 01:34:33 | INFO  | Live migrating server 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 2026-03-27 01:34:43.314950 | orchestrator | 2026-03-27 01:34:43 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress 2026-03-27 01:34:45.677469 | orchestrator | 2026-03-27 01:34:45 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress 2026-03-27 01:34:48.028046 | orchestrator | 2026-03-27 01:34:48 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress 2026-03-27 01:34:50.286799 | orchestrator | 2026-03-27 01:34:50 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress 2026-03-27 01:34:52.655208 | orchestrator | 2026-03-27 01:34:52 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress 2026-03-27 
01:34:54.919290 | orchestrator | 2026-03-27 01:34:54 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress 2026-03-27 01:34:57.217476 | orchestrator | 2026-03-27 01:34:57 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress 2026-03-27 01:34:59.527220 | orchestrator | 2026-03-27 01:34:59 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) is still in progress 2026-03-27 01:35:01.802227 | orchestrator | 2026-03-27 01:35:01 | INFO  | Live migration of 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 (test-1) completed with status ACTIVE 2026-03-27 01:35:01.802295 | orchestrator | 2026-03-27 01:35:01 | INFO  | Live migrating server f19fbfb6-0649-428b-abab-bafb64f00dbb 2026-03-27 01:35:12.351762 | orchestrator | 2026-03-27 01:35:12 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:35:14.721451 | orchestrator | 2026-03-27 01:35:14 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:35:17.099732 | orchestrator | 2026-03-27 01:35:17 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:35:19.484496 | orchestrator | 2026-03-27 01:35:19 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:35:21.800489 | orchestrator | 2026-03-27 01:35:21 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:35:24.182197 | orchestrator | 2026-03-27 01:35:24 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:35:26.464527 | orchestrator | 2026-03-27 01:35:26 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:35:28.770336 | orchestrator | 2026-03-27 01:35:28 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb 
(test) is still in progress 2026-03-27 01:35:31.072782 | orchestrator | 2026-03-27 01:35:31 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:35:33.385819 | orchestrator | 2026-03-27 01:35:33 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) is still in progress 2026-03-27 01:35:35.771570 | orchestrator | 2026-03-27 01:35:35 | INFO  | Live migration of f19fbfb6-0649-428b-abab-bafb64f00dbb (test) completed with status ACTIVE 2026-03-27 01:35:35.991166 | orchestrator | + compute_list 2026-03-27 01:35:35.991230 | orchestrator | + osism manage compute list testbed-node-3 2026-03-27 01:35:37.650889 | orchestrator | 2026-03-27 01:35:37 | ERROR  | Unable to get ansible vault password 2026-03-27 01:35:37.650991 | orchestrator | 2026-03-27 01:35:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-27 01:35:37.651012 | orchestrator | 2026-03-27 01:35:37 | ERROR  | Dropping encrypted entries 2026-03-27 01:35:38.819397 | orchestrator | +------+--------+----------+ 2026-03-27 01:35:38.819498 | orchestrator | | ID | Name | Status | 2026-03-27 01:35:38.819516 | orchestrator | |------+--------+----------| 2026-03-27 01:35:38.819529 | orchestrator | +------+--------+----------+ 2026-03-27 01:35:39.135310 | orchestrator | + osism manage compute list testbed-node-4 2026-03-27 01:35:40.653483 | orchestrator | 2026-03-27 01:35:40 | ERROR  | Unable to get ansible vault password 2026-03-27 01:35:40.653564 | orchestrator | 2026-03-27 01:35:40 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-27 01:35:40.653574 | orchestrator | 2026-03-27 01:35:40 | ERROR  | Dropping encrypted entries 2026-03-27 01:35:41.846452 | orchestrator | +------+--------+----------+ 2026-03-27 01:35:41.846529 | orchestrator | | ID | Name | Status | 2026-03-27 01:35:41.846536 | orchestrator | 
|------+--------+----------| 2026-03-27 01:35:41.846541 | orchestrator | +------+--------+----------+ 2026-03-27 01:35:42.167256 | orchestrator | + osism manage compute list testbed-node-5 2026-03-27 01:35:43.762710 | orchestrator | 2026-03-27 01:35:43 | ERROR  | Unable to get ansible vault password 2026-03-27 01:35:43.762787 | orchestrator | 2026-03-27 01:35:43 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-27 01:35:43.762800 | orchestrator | 2026-03-27 01:35:43 | ERROR  | Dropping encrypted entries 2026-03-27 01:35:45.380192 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-27 01:35:45.380278 | orchestrator | | ID | Name | Status | 2026-03-27 01:35:45.380289 | orchestrator | |--------------------------------------+--------+----------| 2026-03-27 01:35:45.380297 | orchestrator | | 603639ea-0672-4b4a-9398-906fc860df0c | test-4 | ACTIVE | 2026-03-27 01:35:45.380306 | orchestrator | | cfff0be1-09b5-4d6d-a6ae-08ebe4d2f73d | test-3 | ACTIVE | 2026-03-27 01:35:45.380340 | orchestrator | | 1d8278fb-a580-4a9a-9140-3ac30647656d | test-2 | ACTIVE | 2026-03-27 01:35:45.380349 | orchestrator | | 6a4dc4de-4e99-4054-ba7d-2c13e0913fe4 | test-1 | ACTIVE | 2026-03-27 01:35:45.380357 | orchestrator | | f19fbfb6-0649-428b-abab-bafb64f00dbb | test | ACTIVE | 2026-03-27 01:35:45.380365 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-27 01:35:45.702356 | orchestrator | + server_ping 2026-03-27 01:35:45.703302 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-27 01:35:45.703358 | orchestrator | ++ tr -d '\r' 2026-03-27 01:35:48.853328 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-27 01:35:48.853400 | orchestrator | + ping -c3 192.168.112.148 2026-03-27 01:35:48.862762 | 
orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data. 2026-03-27 01:35:48.862877 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=5.87 ms 2026-03-27 01:35:49.860688 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=2.30 ms 2026-03-27 01:35:50.862315 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=1.92 ms 2026-03-27 01:35:50.863054 | orchestrator | 2026-03-27 01:35:50.863078 | orchestrator | --- 192.168.112.148 ping statistics --- 2026-03-27 01:35:50.863085 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-27 01:35:50.863091 | orchestrator | rtt min/avg/max/mdev = 1.923/3.363/5.872/1.780 ms 2026-03-27 01:35:50.863105 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-27 01:35:50.863110 | orchestrator | + ping -c3 192.168.112.135 2026-03-27 01:35:50.872368 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data. 
2026-03-27 01:35:50.872470 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=4.52 ms
2026-03-27 01:35:51.872365 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=2.46 ms
2026-03-27 01:35:52.874125 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=2.03 ms
2026-03-27 01:35:52.874208 | orchestrator |
2026-03-27 01:35:52.874216 | orchestrator | --- 192.168.112.135 ping statistics ---
2026-03-27 01:35:52.874222 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-27 01:35:52.874228 | orchestrator | rtt min/avg/max/mdev = 2.034/3.004/4.516/1.083 ms
2026-03-27 01:35:52.874233 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:35:52.874239 | orchestrator | + ping -c3 192.168.112.156
2026-03-27 01:35:52.886516 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-03-27 01:35:52.886583 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=8.58 ms
2026-03-27 01:35:53.882990 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.95 ms
2026-03-27 01:35:54.883037 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.78 ms
2026-03-27 01:35:54.883120 | orchestrator |
2026-03-27 01:35:54.883131 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-03-27 01:35:54.883139 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:35:54.883146 | orchestrator | rtt min/avg/max/mdev = 1.776/4.434/8.576/2.967 ms
2026-03-27 01:35:54.883684 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:35:54.883723 | orchestrator | + ping -c3 192.168.112.134
2026-03-27 01:35:54.896573 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2026-03-27 01:35:54.896676 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=8.41 ms
2026-03-27 01:35:55.892293 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=2.53 ms
2026-03-27 01:35:56.894268 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.94 ms
2026-03-27 01:35:56.894386 | orchestrator |
2026-03-27 01:35:56.894411 | orchestrator | --- 192.168.112.134 ping statistics ---
2026-03-27 01:35:56.894431 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:35:56.894450 | orchestrator | rtt min/avg/max/mdev = 1.944/4.293/8.407/2.918 ms
2026-03-27 01:35:56.894471 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-27 01:35:56.894540 | orchestrator | + ping -c3 192.168.112.187
2026-03-27 01:35:56.904405 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data.
2026-03-27 01:35:56.904486 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=5.84 ms
2026-03-27 01:35:57.904063 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=3.01 ms
2026-03-27 01:35:58.904196 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.89 ms
2026-03-27 01:35:58.904294 | orchestrator |
2026-03-27 01:35:58.904311 | orchestrator | --- 192.168.112.187 ping statistics ---
2026-03-27 01:35:58.904325 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-27 01:35:58.904336 | orchestrator | rtt min/avg/max/mdev = 1.891/3.578/5.835/1.659 ms
2026-03-27 01:35:59.006309 | orchestrator | ok: Runtime: 0:19:32.073386
2026-03-27 01:35:59.048611 |
2026-03-27 01:35:59.048767 | TASK [Run tempest]
2026-03-27 01:35:59.828976 | orchestrator | + set -e
2026-03-27 01:35:59.829182 | orchestrator | + source /opt/manager-vars.sh
2026-03-27 01:35:59.829956 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-27 01:35:59.830004 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-27 01:35:59.830046 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-27 01:35:59.830060 | orchestrator | ++ CEPH_VERSION=reef
2026-03-27 01:35:59.830071 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-27 01:35:59.830109 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-27 01:35:59.830126 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-27 01:35:59.830144 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-27 01:35:59.830153 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-27 01:35:59.830169 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-27 01:35:59.830177 | orchestrator | ++ export ARA=false
2026-03-27 01:35:59.830185 | orchestrator | ++ ARA=false
2026-03-27 01:35:59.830197 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-27 01:35:59.830205 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-27 01:35:59.830222 | orchestrator | ++ export TEMPEST=true
2026-03-27 01:35:59.830234 | orchestrator | ++ TEMPEST=true
2026-03-27 01:35:59.830242 | orchestrator | ++ export IS_ZUUL=true
2026-03-27 01:35:59.830250 | orchestrator | ++ IS_ZUUL=true
2026-03-27 01:35:59.830260 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154
2026-03-27 01:35:59.830268 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.154
2026-03-27 01:35:59.830277 | orchestrator | ++ export EXTERNAL_API=false
2026-03-27 01:35:59.830285 | orchestrator | ++ EXTERNAL_API=false
2026-03-27 01:35:59.830293 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-27 01:35:59.830301 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-27 01:35:59.830309 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-27 01:35:59.830317 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-27 01:35:59.830325 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-27 01:35:59.830333 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-27 01:35:59.830554 | orchestrator |
2026-03-27 01:35:59.830578 | orchestrator | # Tempest
2026-03-27 01:35:59.830591 | orchestrator |
2026-03-27 01:35:59.830605 | orchestrator | + echo
2026-03-27 01:35:59.830617 | orchestrator | + echo '# Tempest'
2026-03-27 01:35:59.830631 | orchestrator | + echo
2026-03-27 01:35:59.830665 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-03-27 01:35:59.830679 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-03-27 01:36:11.278076 | orchestrator | 2026-03-27 01:36:11 | INFO  | Prepare task for execution of tempest.
2026-03-27 01:36:11.354689 | orchestrator | 2026-03-27 01:36:11 | INFO  | Task 1f2b880d-dc0e-4796-97b6-cd74e10e7839 (tempest) was prepared for execution.
2026-03-27 01:36:11.354789 | orchestrator | 2026-03-27 01:36:11 | INFO  | It takes a moment until task 1f2b880d-dc0e-4796-97b6-cd74e10e7839 (tempest) has been started and output is visible here.
2026-03-27 01:37:29.627660 | orchestrator |
2026-03-27 01:37:29.627880 | orchestrator | PLAY [Run tempest] *************************************************************
2026-03-27 01:37:29.627904 | orchestrator |
2026-03-27 01:37:29.627918 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-03-27 01:37:29.627944 | orchestrator | Friday 27 March 2026 01:36:14 +0000 (0:00:00.348) 0:00:00.348 **********
2026-03-27 01:37:29.627958 | orchestrator | changed: [testbed-manager]
2026-03-27 01:37:29.627971 | orchestrator |
2026-03-27 01:37:29.627986 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-03-27 01:37:29.627999 | orchestrator | Friday 27 March 2026 01:36:16 +0000 (0:00:01.087) 0:00:01.436 **********
2026-03-27 01:37:29.628013 | orchestrator | changed: [testbed-manager]
2026-03-27 01:37:29.628025 | orchestrator |
2026-03-27 01:37:29.628038 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-03-27 01:37:29.628051 | orchestrator | Friday 27 March 2026 01:36:17 +0000 (0:00:01.219) 0:00:02.655 **********
2026-03-27 01:37:29.628064 | orchestrator | ok: [testbed-manager]
2026-03-27 01:37:29.628079 | orchestrator |
2026-03-27 01:37:29.628093 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-03-27 01:37:29.628107 | orchestrator | Friday 27 March 2026 01:36:17 +0000 (0:00:00.499) 0:00:03.155 **********
2026-03-27 01:37:29.628121 | orchestrator | changed: [testbed-manager]
2026-03-27 01:37:29.628135 | orchestrator |
2026-03-27 01:37:29.628149 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-03-27 01:37:29.628163 | orchestrator | Friday 27 March 2026 01:36:40 +0000 (0:00:23.202) 0:00:26.357 **********
2026-03-27 01:37:29.628237 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-03-27 01:37:29.628253 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-03-27 01:37:29.628285 | orchestrator |
2026-03-27 01:37:29.628302 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-03-27 01:37:29.628311 | orchestrator | Friday 27 March 2026 01:36:49 +0000 (0:00:08.615) 0:00:34.973 **********
2026-03-27 01:37:29.628319 | orchestrator | ok: [testbed-manager] => {
2026-03-27 01:37:29.628327 | orchestrator |  "changed": false,
2026-03-27 01:37:29.628335 | orchestrator |  "msg": "All assertions passed"
2026-03-27 01:37:29.628344 | orchestrator | }
2026-03-27 01:37:29.628352 | orchestrator |
2026-03-27 01:37:29.628360 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-03-27 01:37:29.628368 | orchestrator | Friday 27 March 2026 01:36:49 +0000 (0:00:00.152) 0:00:35.126 **********
2026-03-27 01:37:29.628376 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:37:29.628384 | orchestrator |
2026-03-27 01:37:29.628392 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-03-27 01:37:29.628400 | orchestrator | Friday 27 March 2026 01:36:53 +0000 (0:00:03.689) 0:00:38.816 **********
2026-03-27 01:37:29.628408 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:37:29.628416 | orchestrator |
2026-03-27 01:37:29.628424 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-03-27 01:37:29.628432 | orchestrator | Friday 27 March 2026 01:36:55 +0000 (0:00:01.917) 0:00:40.733 **********
2026-03-27 01:37:29.628440 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:37:29.628453 | orchestrator |
2026-03-27 01:37:29.628466 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-03-27 01:37:29.628478 | orchestrator | Friday 27 March 2026 01:36:59 +0000 (0:00:03.820) 0:00:44.554 **********
2026-03-27 01:37:29.628491 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:37:29.628503 | orchestrator |
2026-03-27 01:37:29.628515 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-03-27 01:37:29.628529 | orchestrator | Friday 27 March 2026 01:36:59 +0000 (0:00:00.193) 0:00:44.748 **********
2026-03-27 01:37:29.628542 | orchestrator | changed: [testbed-manager]
2026-03-27 01:37:29.628554 | orchestrator |
2026-03-27 01:37:29.628562 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-03-27 01:37:29.628570 | orchestrator | Friday 27 March 2026 01:37:01 +0000 (0:00:02.547) 0:00:47.295 **********
2026-03-27 01:37:29.628578 | orchestrator | changed: [testbed-manager]
2026-03-27 01:37:29.628586 | orchestrator |
2026-03-27 01:37:29.628594 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-03-27 01:37:29.628601 | orchestrator | Friday 27 March 2026 01:37:10 +0000 (0:00:08.490) 0:00:55.786 **********
2026-03-27 01:37:29.628609 | orchestrator | changed: [testbed-manager]
2026-03-27 01:37:29.628617 | orchestrator |
2026-03-27 01:37:29.628625 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-03-27 01:37:29.628633 | orchestrator | Friday 27 March 2026 01:37:11 +0000 (0:00:00.609) 0:00:56.396 **********
2026-03-27 01:37:29.628640 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:37:29.628648 | orchestrator |
2026-03-27 01:37:29.628656 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-03-27 01:37:29.628664 | orchestrator | Friday 27 March 2026 01:37:12 +0000 (0:00:01.397) 0:00:57.793 **********
2026-03-27 01:37:29.628671 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:37:29.628679 | orchestrator |
2026-03-27 01:37:29.628687 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-03-27 01:37:29.628695 | orchestrator | Friday 27 March 2026 01:37:13 +0000 (0:00:01.457) 0:00:59.250 **********
2026-03-27 01:37:29.628703 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:37:29.628730 | orchestrator |
2026-03-27 01:37:29.628742 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-03-27 01:37:29.628761 | orchestrator | Friday 27 March 2026 01:37:14 +0000 (0:00:00.169) 0:00:59.420 **********
2026-03-27 01:37:29.628769 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:37:29.628776 | orchestrator |
2026-03-27 01:37:29.628793 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-03-27 01:37:29.628801 | orchestrator | Friday 27 March 2026 01:37:14 +0000 (0:00:00.295) 0:00:59.715 **********
2026-03-27 01:37:29.628809 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-27 01:37:29.628816 | orchestrator |
2026-03-27 01:37:29.628824 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-03-27 01:37:29.628867 | orchestrator | Friday 27 March 2026 01:37:18 +0000 (0:00:03.987) 0:01:03.703 **********
2026-03-27 01:37:29.628880 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-03-27 01:37:29.628900 | orchestrator |  "changed": false,
2026-03-27 01:37:29.628914 | orchestrator |  "msg": "All assertions passed"
2026-03-27 01:37:29.628927 | orchestrator | }
2026-03-27 01:37:29.628941 | orchestrator |
2026-03-27 01:37:29.628957 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-03-27 01:37:29.628971 | orchestrator | Friday 27 March 2026 01:37:18 +0000 (0:00:00.191) 0:01:03.895 **********
2026-03-27 01:37:29.628985 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-27 01:37:29.628998 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-27 01:37:29.629005 | orchestrator | skipping: [testbed-manager]
2026-03-27 01:37:29.629013 | orchestrator |
2026-03-27 01:37:29.629021 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-03-27 01:37:29.629029 | orchestrator | Friday 27 March 2026 01:37:18 +0000 (0:00:00.212) 0:01:04.107 **********
2026-03-27 01:37:29.629036 | orchestrator | skipping: [testbed-manager]
2026-03-27 01:37:29.629044 | orchestrator |
2026-03-27 01:37:29.629052 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-03-27 01:37:29.629059 | orchestrator | Friday 27 March 2026 01:37:18 +0000 (0:00:00.152) 0:01:04.260 **********
2026-03-27 01:37:29.629067 | orchestrator | ok: [testbed-manager]
2026-03-27 01:37:29.629075 | orchestrator |
2026-03-27 01:37:29.629083 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-03-27 01:37:29.629090 | orchestrator | Friday 27 March 2026 01:37:19 +0000 (0:00:00.458) 0:01:04.718 **********
2026-03-27 01:37:29.629098 | orchestrator | changed: [testbed-manager]
2026-03-27 01:37:29.629106 | orchestrator |
2026-03-27 01:37:29.629114 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-03-27 01:37:29.629122 | orchestrator | Friday 27 March 2026 01:37:20 +0000 (0:00:00.863) 0:01:05.582 **********
2026-03-27 01:37:29.629130 | orchestrator | ok: [testbed-manager]
2026-03-27 01:37:29.629137 | orchestrator |
2026-03-27 01:37:29.629145 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-03-27 01:37:29.629153 | orchestrator | Friday 27 March 2026 01:37:20 +0000 (0:00:00.457) 0:01:06.039 **********
2026-03-27 01:37:29.629160 | orchestrator | skipping: [testbed-manager]
2026-03-27 01:37:29.629168 | orchestrator |
2026-03-27 01:37:29.629176 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-03-27 01:37:29.629183 | orchestrator | Friday 27 March 2026 01:37:20 +0000 (0:00:00.303) 0:01:06.342 **********
2026-03-27 01:37:29.629191 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-27 01:37:29.629200 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-27 01:37:29.629207 | orchestrator |
2026-03-27 01:37:29.629215 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-03-27 01:37:29.629223 | orchestrator | Friday 27 March 2026 01:37:28 +0000 (0:00:07.765) 0:01:14.108 **********
2026-03-27 01:37:29.629231 | orchestrator | changed: [testbed-manager]
2026-03-27 01:37:29.629246 | orchestrator |
2026-03-27 01:37:29.629254 | orchestrator | PLAY RECAP *********************************************************************
2026-03-27 01:37:29.629263 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-27 01:37:29.629272 | orchestrator |
2026-03-27 01:37:29.629279 | orchestrator |
2026-03-27 01:37:29.629287 | orchestrator | TASKS RECAP ********************************************************************
2026-03-27 01:37:29.629295 | orchestrator | Friday 27 March 2026 01:37:29 +0000 (0:00:00.891) 0:01:15.000 **********
2026-03-27 01:37:29.629303 | orchestrator | ===============================================================================
2026-03-27 01:37:29.629310 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 23.20s
2026-03-27 01:37:29.629318 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.62s
2026-03-27 01:37:29.629326 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.49s
2026-03-27 01:37:29.629333 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.77s
2026-03-27 01:37:29.629346 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.99s
2026-03-27 01:37:29.629354 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.82s
2026-03-27 01:37:29.629362 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.69s
2026-03-27 01:37:29.629370 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.55s
2026-03-27 01:37:29.629378 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.92s
2026-03-27 01:37:29.629385 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.46s
2026-03-27 01:37:29.629393 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.40s
2026-03-27 01:37:29.629401 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.22s
2026-03-27 01:37:29.629409 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.09s
2026-03-27 01:37:29.629417 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 0.89s
2026-03-27 01:37:29.629424 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.86s
2026-03-27 01:37:29.629432 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.61s
2026-03-27 01:37:29.629440 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.50s
2026-03-27 01:37:29.629454 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.46s
2026-03-27 01:37:29.789853 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.46s
2026-03-27 01:37:29.789925 | orchestrator | osism.validations.tempest : Copy include list --------------------------- 0.30s
2026-03-27 01:37:29.911137 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-27 01:37:29.914818 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-27 01:37:29.918661 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-27 01:37:29.918766 | orchestrator |
2026-03-27 01:37:29.918776 | orchestrator | ## IDENTITY (API)
2026-03-27 01:37:29.918783 | orchestrator |
2026-03-27 01:37:29.918789 | orchestrator | + echo
2026-03-27 01:37:29.918796 | orchestrator | + echo '## IDENTITY (API)'
2026-03-27 01:37:29.918802 | orchestrator | + echo
2026-03-27 01:37:29.918809 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-27 01:37:29.918816 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-27 01:37:29.919515 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-03-27 01:37:29.920995 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-27 01:37:29.924584 | orchestrator | + tee -a /opt/tempest/20260327-0137.log
2026-03-27 01:37:33.808385 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-27 01:37:33.808491 | orchestrator | Did you mean one of these?
2026-03-27 01:37:33.808502 | orchestrator | help
2026-03-27 01:37:33.808509 | orchestrator | init
2026-03-27 01:37:34.163218 | orchestrator |
2026-03-27 01:37:34.163299 | orchestrator | ## IMAGE (API)
2026-03-27 01:37:34.163307 | orchestrator |
2026-03-27 01:37:34.163312 | orchestrator | + echo
2026-03-27 01:37:34.163316 | orchestrator | + echo '## IMAGE (API)'
2026-03-27 01:37:34.163322 | orchestrator | + echo
2026-03-27 01:37:34.163326 | orchestrator | + _tempest tempest.api.image.v2
2026-03-27 01:37:34.163331 | orchestrator | + local regex=tempest.api.image.v2
2026-03-27 01:37:34.164470 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-03-27 01:37:34.164525 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-27 01:37:34.166375 | orchestrator | + tee -a /opt/tempest/20260327-0137.log
2026-03-27 01:37:37.699654 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-27 01:37:37.699799 | orchestrator | Did you mean one of these?
2026-03-27 01:37:37.699813 | orchestrator | help
2026-03-27 01:37:37.699821 | orchestrator | init
2026-03-27 01:37:37.976578 | orchestrator |
2026-03-27 01:37:37.976677 | orchestrator | ## NETWORK (API)
2026-03-27 01:37:37.976690 | orchestrator |
2026-03-27 01:37:37.976701 | orchestrator | + echo
2026-03-27 01:37:37.976711 | orchestrator | + echo '## NETWORK (API)'
2026-03-27 01:37:37.976770 | orchestrator | + echo
2026-03-27 01:37:37.976782 | orchestrator | + _tempest tempest.api.network
2026-03-27 01:37:37.976792 | orchestrator | + local regex=tempest.api.network
2026-03-27 01:37:37.977296 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-03-27 01:37:37.977387 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-27 01:37:37.979828 | orchestrator | + tee -a /opt/tempest/20260327-0137.log
2026-03-27 01:37:41.162302 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-27 01:37:41.162407 | orchestrator | Did you mean one of these?
2026-03-27 01:37:41.162415 | orchestrator | help
2026-03-27 01:37:41.162420 | orchestrator | init
2026-03-27 01:37:41.434679 | orchestrator |
2026-03-27 01:37:41.434823 | orchestrator | ## VOLUME (API)
2026-03-27 01:37:41.434849 | orchestrator |
2026-03-27 01:37:41.434868 | orchestrator | + echo
2026-03-27 01:37:41.434886 | orchestrator | + echo '## VOLUME (API)'
2026-03-27 01:37:41.434905 | orchestrator | + echo
2026-03-27 01:37:41.434924 | orchestrator | + _tempest tempest.api.volume
2026-03-27 01:37:41.434941 | orchestrator | + local regex=tempest.api.volume
2026-03-27 01:37:41.435616 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-03-27 01:37:41.435672 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-27 01:37:41.440169 | orchestrator | + tee -a /opt/tempest/20260327-0137.log
2026-03-27 01:37:44.784973 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-27 01:37:44.785087 | orchestrator | Did you mean one of these?
2026-03-27 01:37:44.785097 | orchestrator | help
2026-03-27 01:37:44.785102 | orchestrator | init
2026-03-27 01:37:45.025114 | orchestrator |
2026-03-27 01:37:45.025293 | orchestrator | ## COMPUTE (API)
2026-03-27 01:37:45.025328 | orchestrator |
2026-03-27 01:37:45.025449 | orchestrator | + echo
2026-03-27 01:37:45.025470 | orchestrator | + echo '## COMPUTE (API)'
2026-03-27 01:37:45.025488 | orchestrator | + echo
2026-03-27 01:37:45.025616 | orchestrator | + _tempest tempest.api.compute
2026-03-27 01:37:45.025681 | orchestrator | + local regex=tempest.api.compute
2026-03-27 01:37:45.025963 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-03-27 01:37:45.026101 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-27 01:37:45.028929 | orchestrator | + tee -a /opt/tempest/20260327-0137.log
2026-03-27 01:37:48.333624 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-27 01:37:48.333716 | orchestrator | Did you mean one of these?
2026-03-27 01:37:48.333751 | orchestrator | help
2026-03-27 01:37:48.333762 | orchestrator | init
2026-03-27 01:37:48.689050 | orchestrator |
2026-03-27 01:37:48.689145 | orchestrator | ## DNS (API)
2026-03-27 01:37:48.689153 | orchestrator |
2026-03-27 01:37:48.689160 | orchestrator | + echo
2026-03-27 01:37:48.689169 | orchestrator | + echo '## DNS (API)'
2026-03-27 01:37:48.689177 | orchestrator | + echo
2026-03-27 01:37:48.689291 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-03-27 01:37:48.689299 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-03-27 01:37:48.689365 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-03-27 01:37:48.692779 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-27 01:37:48.694505 | orchestrator | + tee -a /opt/tempest/20260327-0137.log
2026-03-27 01:37:52.342897 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-27 01:37:52.343045 | orchestrator | Did you mean one of these?
2026-03-27 01:37:52.343061 | orchestrator | help
2026-03-27 01:37:52.343071 | orchestrator | init
2026-03-27 01:37:52.701360 | orchestrator |
2026-03-27 01:37:52.701423 | orchestrator | ## OBJECT-STORE (API)
2026-03-27 01:37:52.701430 | orchestrator |
2026-03-27 01:37:52.701435 | orchestrator | + echo
2026-03-27 01:37:52.701440 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-03-27 01:37:52.701445 | orchestrator | + echo
2026-03-27 01:37:52.701451 | orchestrator | + _tempest tempest.api.object_storage
2026-03-27 01:37:52.701457 | orchestrator | + local regex=tempest.api.object_storage
2026-03-27 01:37:52.701709 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-03-27 01:37:52.703302 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-27 01:37:52.705410 | orchestrator | + tee -a /opt/tempest/20260327-0137.log
2026-03-27 01:37:56.292988 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-27 01:37:56.293084 | orchestrator | Did you mean one of these?
2026-03-27 01:37:56.293098 | orchestrator | help
2026-03-27 01:37:56.293108 | orchestrator | init
2026-03-27 01:37:56.702017 | orchestrator | ok: Runtime: 0:01:57.269024
2026-03-27 01:37:56.715797 |
2026-03-27 01:37:56.715917 | TASK [Check prometheus alert status]
2026-03-27 01:37:57.252899 | orchestrator | skipping: Conditional result was False
2026-03-27 01:37:57.256375 |
2026-03-27 01:37:57.256569 | PLAY RECAP
2026-03-27 01:37:57.256717 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-03-27 01:37:57.256790 |
2026-03-27 01:37:57.484606 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-27 01:37:57.487874 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-27 01:37:58.307155 |
2026-03-27 01:37:58.307325 | PLAY [Post output play]
2026-03-27 01:37:58.324188 |
2026-03-27 01:37:58.324333 | LOOP [stage-output : Register sources]
2026-03-27 01:37:58.386525 |
2026-03-27 01:37:58.386757 | TASK [stage-output : Check sudo]
2026-03-27 01:37:59.305551 | orchestrator | sudo: a password is required
2026-03-27 01:37:59.425385 | orchestrator | ok: Runtime: 0:00:00.026432
2026-03-27 01:37:59.440768 |
2026-03-27 01:37:59.440924 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-27 01:37:59.482296 |
2026-03-27 01:37:59.482662 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-27 01:37:59.553913 | orchestrator | ok
2026-03-27 01:37:59.562260 |
2026-03-27 01:37:59.562390 | LOOP [stage-output : Ensure target folders exist]
2026-03-27 01:38:00.068330 | orchestrator | ok: "docs"
2026-03-27 01:38:00.068669 |
2026-03-27 01:38:00.349382 | orchestrator | ok: "artifacts"
2026-03-27 01:38:00.621483 | orchestrator | ok: "logs"
2026-03-27 01:38:00.649041 |
2026-03-27 01:38:00.649240 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-27 01:38:00.690427 |
2026-03-27 01:38:00.690737 | TASK [stage-output : Make all log files readable]
2026-03-27 01:38:00.992852 | orchestrator | ok
2026-03-27 01:38:01.001625 |
2026-03-27 01:38:01.001782 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-27 01:38:01.036139 | orchestrator | skipping: Conditional result was False
2026-03-27 01:38:01.053141 |
2026-03-27 01:38:01.053379 | TASK [stage-output : Discover log files for compression]
2026-03-27 01:38:01.080569 | orchestrator | skipping: Conditional result was False
2026-03-27 01:38:01.092051 |
2026-03-27 01:38:01.092257 | LOOP [stage-output : Archive everything from logs]
2026-03-27 01:38:01.149795 |
2026-03-27 01:38:01.149980 | PLAY [Post cleanup play]
2026-03-27 01:38:01.159358 |
2026-03-27 01:38:01.159475 | TASK [Set cloud fact (Zuul deployment)]
2026-03-27 01:38:01.220661 | orchestrator | ok
2026-03-27 01:38:01.229141 |
2026-03-27 01:38:01.229250 | TASK [Set cloud fact (local deployment)]
2026-03-27 01:38:01.252866 | orchestrator | skipping: Conditional result was False
2026-03-27 01:38:01.264692 |
2026-03-27 01:38:01.264816 | TASK [Clean the cloud environment]
2026-03-27 01:38:03.034929 | orchestrator | 2026-03-27 01:38:03 - clean up servers
2026-03-27 01:38:03.784475 | orchestrator | 2026-03-27 01:38:03 - testbed-manager
2026-03-27 01:38:03.875544 | orchestrator | 2026-03-27 01:38:03 - testbed-node-2
2026-03-27 01:38:03.965307 | orchestrator | 2026-03-27 01:38:03 - testbed-node-0
2026-03-27 01:38:04.053964 | orchestrator | 2026-03-27 01:38:04 - testbed-node-4
2026-03-27 01:38:04.150815 | orchestrator | 2026-03-27 01:38:04 - testbed-node-1
2026-03-27 01:38:04.245575 | orchestrator | 2026-03-27 01:38:04 - testbed-node-5
2026-03-27 01:38:04.336427 | orchestrator | 2026-03-27 01:38:04 - testbed-node-3
2026-03-27 01:38:04.425943 | orchestrator | 2026-03-27 01:38:04 - clean up keypairs
2026-03-27 01:38:04.447357 | orchestrator | 2026-03-27 01:38:04 - testbed
2026-03-27 01:38:04.480494 | orchestrator | 2026-03-27 01:38:04 - wait for servers to be gone
2026-03-27 01:38:15.396622 | orchestrator | 2026-03-27 01:38:15 - clean up ports
2026-03-27 01:38:15.587722 | orchestrator | 2026-03-27 01:38:15 - 21ae1198-6c13-4786-9c56-59267516f02a
2026-03-27 01:38:16.030159 | orchestrator | 2026-03-27 01:38:16 - 553abd29-85b5-4c6d-97e9-1bc5505f8182
2026-03-27 01:38:16.309659 | orchestrator | 2026-03-27 01:38:16 - 9567227f-df1f-4c80-a70d-3adad4166525
2026-03-27 01:38:16.575304 | orchestrator | 2026-03-27 01:38:16 - ab30bb2d-bcf8-4df6-89b7-377c51d6542c
2026-03-27 01:38:16.797913 | orchestrator | 2026-03-27 01:38:16 - c646f19c-f46a-47be-91e4-f179f1d2ebf1
2026-03-27 01:38:17.083148 | orchestrator | 2026-03-27 01:38:17 - da03f508-19f8-4cfa-9f1e-fbce672f9d2c
2026-03-27 01:38:17.325638 | orchestrator | 2026-03-27 01:38:17 - eb7a89db-1285-463c-9785-6f73d8a4e5c0
2026-03-27 01:38:17.551988 | orchestrator | 2026-03-27 01:38:17 - clean up volumes
2026-03-27 01:38:17.691271 | orchestrator | 2026-03-27 01:38:17 - testbed-volume-5-node-base
2026-03-27 01:38:17.735821 | orchestrator | 2026-03-27 01:38:17 - testbed-volume-1-node-base
2026-03-27 01:38:17.783707 | orchestrator | 2026-03-27 01:38:17 - testbed-volume-4-node-base
2026-03-27 01:38:17.833085 | orchestrator | 2026-03-27 01:38:17 - testbed-volume-3-node-base
2026-03-27 01:38:17.882081 | orchestrator | 2026-03-27 01:38:17 - testbed-volume-0-node-base
2026-03-27 01:38:17.927680 | orchestrator | 2026-03-27 01:38:17 - testbed-volume-2-node-base
2026-03-27 01:38:17.977443 | orchestrator | 2026-03-27 01:38:17 - testbed-volume-manager-base
2026-03-27 01:38:18.026900 | orchestrator | 2026-03-27 01:38:18 - testbed-volume-4-node-4
2026-03-27 01:38:18.075269 | orchestrator | 2026-03-27 01:38:18 - testbed-volume-6-node-3
2026-03-27 01:38:18.135743 | orchestrator | 2026-03-27 01:38:18 - testbed-volume-2-node-5
2026-03-27 01:38:18.181012 | orchestrator | 2026-03-27 01:38:18 - testbed-volume-3-node-3
2026-03-27 01:38:18.224900 | orchestrator | 2026-03-27 01:38:18 - testbed-volume-8-node-5
2026-03-27 01:38:18.272162 | orchestrator | 2026-03-27 01:38:18 - testbed-volume-5-node-5
2026-03-27 01:38:18.317101 | orchestrator | 2026-03-27 01:38:18 - testbed-volume-1-node-4
2026-03-27 01:38:18.362362 | orchestrator | 2026-03-27 01:38:18 - testbed-volume-7-node-4
2026-03-27 01:38:18.405098 | orchestrator | 2026-03-27 01:38:18 - testbed-volume-0-node-3
2026-03-27 01:38:18.449908 | orchestrator | 2026-03-27 01:38:18 - disconnect routers
2026-03-27 01:38:18.577907 | orchestrator | 2026-03-27 01:38:18 - testbed
2026-03-27 01:38:19.499728 | orchestrator | 2026-03-27 01:38:19 - clean up subnets
2026-03-27 01:38:19.566876 | orchestrator | 2026-03-27 01:38:19 - subnet-testbed-management
2026-03-27 01:38:19.749117 | orchestrator | 2026-03-27 01:38:19 - clean up networks
2026-03-27 01:38:19.893336 | orchestrator | 2026-03-27 01:38:19 - net-testbed-management
2026-03-27 01:38:20.220704 | orchestrator | 2026-03-27 01:38:20 - clean up security groups
2026-03-27 01:38:20.265012 | orchestrator | 2026-03-27 01:38:20 - testbed-node
2026-03-27 01:38:20.399206 | orchestrator | 2026-03-27 01:38:20 - testbed-management
2026-03-27 01:38:20.578448 | orchestrator | 2026-03-27 01:38:20 - clean up floating ips
2026-03-27 01:38:20.607549 | orchestrator | 2026-03-27 01:38:20 - 81.163.193.154
2026-03-27 01:38:20.944381 | orchestrator | 2026-03-27 01:38:20 - clean up routers
2026-03-27 01:38:21.047008 | orchestrator | 2026-03-27 01:38:21 - testbed
2026-03-27 01:38:21.992502 | orchestrator | ok: Runtime: 0:00:20.264521
2026-03-27 01:38:21.995432 |
2026-03-27 01:38:21.995558 | PLAY RECAP
2026-03-27 01:38:21.995639 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-27 01:38:21.995677 |
2026-03-27 01:38:22.148317 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-27 01:38:22.151197 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-27 01:38:22.917129 |
2026-03-27 01:38:22.917297 | PLAY [Cleanup play]
2026-03-27 01:38:22.933730 |
2026-03-27 01:38:22.933865 | TASK [Set cloud fact (Zuul deployment)]
2026-03-27 01:38:23.004302 | orchestrator | ok
2026-03-27 01:38:23.013808 |
2026-03-27 01:38:23.014013 | TASK [Set cloud fact (local deployment)]
2026-03-27 01:38:23.048985 | orchestrator | skipping: Conditional result was False
2026-03-27 01:38:23.064339 |
2026-03-27 01:38:23.064500 | TASK [Clean the cloud environment]
2026-03-27 01:38:24.315258 | orchestrator | 2026-03-27 01:38:24 - clean up servers
2026-03-27 01:38:24.854934 | orchestrator | 2026-03-27 01:38:24 - clean up keypairs
2026-03-27 01:38:24.873014 | orchestrator | 2026-03-27 01:38:24 - wait for servers to be gone
2026-03-27 01:38:24.921592 | orchestrator | 2026-03-27 01:38:24 - clean up ports
2026-03-27 01:38:25.008576 | orchestrator | 2026-03-27 01:38:25 - clean up volumes
2026-03-27 01:38:25.086629 | orchestrator | 2026-03-27 01:38:25 - disconnect routers
2026-03-27 01:38:25.111091 | orchestrator | 2026-03-27 01:38:25 - clean up subnets
2026-03-27 01:38:25.136928 | orchestrator | 2026-03-27 01:38:25 - clean up networks
2026-03-27 01:38:25.826321 | orchestrator | 2026-03-27 01:38:25 - clean up security groups
2026-03-27 01:38:25.863799 | orchestrator | 2026-03-27 01:38:25 - clean up floating ips
2026-03-27 01:38:25.899059 | orchestrator | 2026-03-27 01:38:25 - clean up routers
2026-03-27 01:38:26.115072 | orchestrator | ok: Runtime: 0:00:02.064059
2026-03-27 01:38:26.118812 |
2026-03-27 01:38:26.118987 | PLAY RECAP
2026-03-27 01:38:26.119117 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-27 01:38:26.119172 |
2026-03-27 01:38:26.247137 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-27 01:38:26.249887 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-27 01:38:27.042661 |
2026-03-27 01:38:27.042824 | PLAY [Base post-fetch]
2026-03-27 01:38:27.059096 |
2026-03-27 01:38:27.059231 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-27 01:38:27.114563 | orchestrator | skipping: Conditional result was False
2026-03-27 01:38:27.126470 |
2026-03-27 01:38:27.126682 | TASK [fetch-output : Set log path for single node]
2026-03-27 01:38:27.187121 | orchestrator | ok
2026-03-27 01:38:27.196608 |
2026-03-27 01:38:27.196751 | LOOP [fetch-output : Ensure local output dirs]
2026-03-27 01:38:27.712279 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/96cbe4b924ce41fb84664617445136cc/work/logs"
2026-03-27 01:38:27.987443 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/96cbe4b924ce41fb84664617445136cc/work/artifacts"
2026-03-27 01:38:28.262478 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/96cbe4b924ce41fb84664617445136cc/work/docs"
2026-03-27 01:38:28.287357 |
2026-03-27 01:38:28.287561 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-27 01:38:29.276824 | orchestrator | changed: .d..t...... ./
2026-03-27 01:38:29.277162 | orchestrator | changed: All items complete
2026-03-27 01:38:29.277214 |
2026-03-27 01:38:30.008095 | orchestrator | changed: .d..t...... ./
2026-03-27 01:38:30.770959 | orchestrator | changed: .d..t...... ./
2026-03-27 01:38:30.797740 |
2026-03-27 01:38:30.797893 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-27 01:38:30.848315 | orchestrator | skipping: Conditional result was False
2026-03-27 01:38:30.858514 | orchestrator | skipping: Conditional result was False
2026-03-27 01:38:30.883341 |
2026-03-27 01:38:30.883504 | PLAY RECAP
2026-03-27 01:38:30.883605 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-27 01:38:30.883660 |
2026-03-27 01:38:31.012680 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-27 01:38:31.015479 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-27 01:38:31.835567 |
2026-03-27 01:38:31.835754 | PLAY [Base post]
2026-03-27 01:38:31.851505 |
2026-03-27 01:38:31.851667 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-27 01:38:32.947505 | orchestrator | changed
2026-03-27 01:38:32.956178 |
2026-03-27 01:38:32.956291 | PLAY RECAP
2026-03-27 01:38:32.956359 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-27 01:38:32.956425 |
2026-03-27 01:38:33.103424 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-27 01:38:33.105810 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-27 01:38:33.937857 |
2026-03-27 01:38:33.938059 | PLAY [Base post-logs]
2026-03-27 01:38:33.949352 |
2026-03-27 01:38:33.949508 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-27 01:38:34.405763 | localhost | changed
2026-03-27 01:38:34.416312 |
2026-03-27 01:38:34.416471 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-27 01:38:34.452519 | localhost | ok
2026-03-27 01:38:34.456080 |
2026-03-27 01:38:34.456196 | TASK [Set zuul-log-path fact]
2026-03-27 01:38:34.474299 | localhost | ok
2026-03-27 01:38:34.491571 |
2026-03-27 01:38:34.491723 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-27 01:38:34.529620 | localhost | ok
2026-03-27 01:38:34.536103 |
2026-03-27 01:38:34.536266 | TASK [upload-logs : Create log directories]
2026-03-27 01:38:35.035560 | localhost | changed
2026-03-27 01:38:35.040257 |
2026-03-27 01:38:35.040416 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-27 01:38:35.560509 | localhost -> localhost | ok: Runtime: 0:00:00.006831
2026-03-27 01:38:35.569205 |
2026-03-27 01:38:35.569392 | TASK [upload-logs : Upload logs to log server]
2026-03-27 01:38:36.139345 | localhost | Output suppressed because no_log was given
2026-03-27 01:38:36.143517 |
2026-03-27 01:38:36.143715 | LOOP [upload-logs : Compress console log and json output]
2026-03-27 01:38:36.204437 | localhost | skipping: Conditional result was False
2026-03-27 01:38:36.208059 | localhost | skipping: Conditional result was False
2026-03-27 01:38:36.219713 |
2026-03-27 01:38:36.219935 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-27 01:38:36.265851 | localhost | skipping: Conditional result was False
2026-03-27 01:38:36.266166 |
2026-03-27 01:38:36.271101 | localhost | skipping: Conditional result was False
2026-03-27 01:38:36.283218 |
2026-03-27 01:38:36.283462 | LOOP [upload-logs : Upload console log and json output]